Blanchet’s choice to block critics on Twitter limits free speech: experts

Snowflake?

Dozens of people — including some MPs — say Bloc Québécois Leader Yves-François Blanchet has blocked them on Twitter after they criticized his statements about Transport Minister Omar Alghabra, with some arguing they have a right to be heard.

Nour El Kadri, the president of the Canadian Arab Federation who was among those blocked by Blanchet on the social media platform, said people should be able to respond to accusations made by politicians.

Last week, after Alghabra, born to a Syrian family in Saudi Arabia, was sworn in as federal transport minister, the Bloc issued a release that sought to sow doubt about his association with what it called the “political Islam movement” due to the minister’s former role as head of the Canadian Arab Federation.

El Kadri tweeted at Blanchet to say the Canadian Arab Federation has been a secular organization under its constitution since it was founded in 1967.

“(I told him) it’s secular like Quebec that you’re asking for, then he blocked me,” he said.

“He started to block other people who were voicing opposing opinions.”

On Twitter, Blanchet argued against the idea that he was robbing anyone of their right to free expression.

“When I ‘block’ people, it’s because their posts don’t interest me (fake accounts, political staff, insults …),” he wrote in French last Thursday.

“That does not prevent them from publishing them. I just won’t see them, nor they mine,” he said, adding things are calmer this way.

Richard Moon, a law professor at the University of Windsor, said it is credible to claim that Blanchet infringed the charter-guaranteed right to freedom of expression of those who can no longer see or comment on his tweets.

While Twitter is not itself subject to the charter as a private entity, Moon said, when a politician uses it as a platform to make announcements and discuss political views, the politician’s account becomes a public platform.

“To exclude someone from responding or addressing because of their political views could then be understood as a restriction on their freedom of expression,” he said.

Duff Conacher, co-founder of the pro-transparency group Democracy Watch, said Blanchet’s Twitter account is a public communication channel and he cannot decide arbitrarily to not allow voters to communicate with him there.

“Politicians are public employees, so they can’t just cut people off from seeing what they’re saying through one of their communication channels,” he said.

“The public has a right to see all their communications.”

Ottawa Mayor Jim Watson faced a lawsuit in 2018 from local activists he had blocked on Twitter. The court action was dropped after Watson conceded his account is public and unblocked everyone he had blocked, so no legal precedent was set.

Blanchet has also blocked some fellow members of Parliament, including Quebec Liberal Greg Fergus, Ontario New Democrat Matthew Green and Manitoba New Democrat Leah Gazan.

Green said he found himself blocked after he criticized Blanchet’s statements defending the right of university professors to use the N-word.

Gazan accused Blanchet last week of racism and Islamophobia over his statement about Alghabra.

“When criticized, he refuses to engage in a conversation, and a conversation he clearly needs to have around his Islamophobia,” she said.

Source: Blanchet’s choice to block critics on Twitter limits free speech: experts

What an Inclusive Recovery from the COVID-19 “Economic Firestorm” Could Look Like: Ethnic and Mainstream Media comparison

Latest overview of ethnic media coverage and mainstream comparison, showing relatively small differences:

Paid sick leave, affordable childcare, reform of the Employment Insurance system, better-quality jobs and higher minimum wage are some of the elements needed to ensure an inclusive recovery from the COVID-19 pandemic, which has hit visible minorities and immigrants the hardest, according to ethnic media coverage of the economic impacts of COVID-19.

Several studies cited in the media analyzed from May to December 2020 showed that, especially early in the pandemic, visible minorities and recent immigrants were harder hit by job losses and by the inability to meet financial obligations and essential needs than white Canadians, long-term immigrants or the Canadian-born population.

The July Labour Force Survey (for the first time based on data disaggregated by race and visible minority status) showed that the unemployment rate was higher for South Asian, Arab, and Black Canadians, which Statistics Canada linked to higher representation of these minorities in hard-hit industries such as food services and retail. Immigrant women were also shown to be disproportionately affected by the pandemic.

Questions around lockdowns

As the second wave of the pandemic brought with it new lockdowns (Toronto and Peel region moved into lockdown on November 23, and a province-wide shutdown in Ontario has been in effect since December 26), the media gave voice to those questioning the effectiveness of such measures in places where most infections happen in industrial and essential workplace settings, like the city of Brampton.

Brampton Mayor Patrick Brown was one of the most frequently cited critical voices; he called the forced closure of small businesses “tinkering around the edges.” Multiple outlets cited Brown as saying that the lockdown in Peel Region was not likely to dramatically reduce the number of new COVID-19 infections in Brampton without other supports in place: better sick benefits, an isolation centre, and better access to testing.

He stressed that staff in factories and front-line workers lose their paycheque if they do not come to work, so many are forced to choose between going to work with symptoms and making the rent payment or putting food on the table.

In late November, Brown made headlines with an appeal by a group of Greater Toronto and Hamilton Area (GTHA) mayors to the province of Ontario for sick leave benefits for front-line workers. The Brampton mayor called the benefits “a missing link” in the pandemic response. As reported, the mayors also asked the provincial government to sign an agreement with employers, reassuring employees that they would not lose their jobs or their salary if they tested positive for COVID-19.

Pressure for sick days came from many sides. A widely cited September report by the research institute ICES found not only that immigrants, refugees and other newcomers accounted for a whopping 44 per cent of all COVID-19 cases in Ontario in the first half of 2020, but also that many immigrants and refugees faced systemic inequities, including lower pay and precarious employment without the right to sick leave.

Systemic inequities, such as the fact that many essential workers cannot afford to self-isolate away from their families, need to be addressed, Brampton Regional Councillor Rowena Santos said in an interview with one of the outlets in November, calling for better access to health care, higher-quality jobs, sick days and a higher minimum wage.

In late November, the media carried a message from Health Minister Patty Hajdu, who said the federal government was working with provinces and territories on sick leave. She admitted it was necessary to have low-barrier access to Employment Insurance (EI) for those working on the front lines, and that workers can be eligible for EI with 170 hours of work.

Calls for EI reform

Problems with accessing EI, especially by underemployed workers and expectant mothers for whom the pandemic-induced job cuts meant not enough working hours to qualify for benefits, prompted calls for the reform of the outdated EI system early on.

A Workers’ Action Centre activist cited in ethnic media in August pointed to the situation of the underemployed, especially restaurant staff and people in the tourism industry, who did not have working-hour guarantees in their contracts and who may not be able to obtain a record of employment to access EI when the Canada Emergency Response Benefit (CERB) ends. He also pointed to self-employed workers such as Uber drivers and people working in food delivery services.

“She-covery” and the importance of childcare

Women, especially racialized women, are over-represented in precarious, low-paying jobs, so the COVID-19 pandemic has had a disproportionate economic impact on them, as demonstrated by various reports cited in multiple ethnic media outlets. A September report by the Ontario Chamber of Commerce entitled “The She-Covery Project” pointed out that women’s labour participation rate had fallen to its lowest in 30 years.

Reports that female immigrants, especially working in health care, were hit especially hard by the pandemic have prompted calls for policies instituting higher pay, paid sick leave, universal childcare and eldercare, and affordable housing.

Since mothers were usually the ones losing their jobs or staying home to take care of the children during the pandemic, the central role of affordable daycare in the economic recovery plans was stressed by the media and the policymakers alike, including in a slew of December media appearances by the Minister of Families, Children and Social Development, Ahmed Hussen. Hussen promised the federal government would create a nationwide childcare program, with details to come in the spring of 2021.

“Shop local” campaign to support small businesses

The struggles of small businesses, often owned by immigrants or visible minorities, also featured strongly in ethnic media coverage, with the newest lockdowns bringing renewed fears of severe economic impacts, but few solutions in sight.

The media stressed that while small businesses like hair salons were forced to close their doors, big retailers like Amazon were allowed to operate. One of the victims of the pandemic featured in October was a Black owner of a beauty parlour who was ineligible for government support, as she had opened her salon only in 2020.

The prospects for small businesses already appeared bleak in August. Jon Shell, managing director at Social Capital Partners and a co-founder of the Save Small Business campaign, was cited as saying that “the recovery looks like it will be very weak for local community businesses, making additional cash flow hard to come by over the rest of the year. Many will not survive.”

Patrick Brown admitted back in May that the pandemic was an “economic firestorm,” and the small stores and businesses were especially badly affected. He called on Brampton residents to support them by shopping locally and ordering take-out food from restaurants in their neighbourhoods. A similar appeal by Ontario Premier Doug Ford was aired in October. The media also reported on Ontario’s NDP Leader Andrea Horwath’s Save Main Street plan, supported by the Canadian Black Chamber of Commerce (CBCC).

The government’s commercial rent assistance program was criticized as ineffective: few landlords decided to participate, as that would have forced them to cover 25 per cent of the rent.

Coverage of other government programs addressed to small businesses was rather limited. Apart from announcements of subsequent extensions of the wage subsidy program, the Canada Emergency Business Account was mentioned only once in a collection of around 200 media clippings—in the context of the government’s recovery plan presented in early December by Minister Hussen.

Comparative analysis with mainstream media

The analysis of Toronto Star coverage was focused on the pandemic’s impact on small businesses. More than half of the articles discussing challenges faced by different types of businesses showcased those owned by immigrants and many told their stories of going through the painful process of closing down permanently.

A lot of coverage was also devoted to government measures and how businesses can access them, for example the Canada Emergency Business Account. Different polls and appeals from business advocacy groups and other stakeholders for the government to do more to help small business owners were also featured.

Like ethnic media, the paper discussed the unfair advantage during lockdown of big-box stores over small businesses. Unlike ethnic media, it also covered the spike in insurance premiums as one of the key factors that forced many businesses to shut down.

In terms of navigating the difficulties of the pandemic, the Star also presented various innovations such as ghost kitchens, a business incubator called District Ventures Kitchen, and other new approaches to doing business in food service. 

Insight from MIREMS media monitoring

Ethnic media “can be expected to become an important voice for ethnically inclusive recovery initiatives,” commented Silke Reichrath, Editor-in-Chief at MIREMS.

“The coverage showed time and again how newcomers often work in essential jobs, which makes them more susceptible to virus exposure,” she stressed. Sectors in focus that rely heavily on newcomers included the taxi industry, the hotel and tourism sector, meat processing plants, long-term care and health care.

Overall, ethnic media have kept their audiences informed about the latest public health guidelines about business openings and closures and about benefits and aid programs available from the three levels of government, Reichrath said.

“They have also raised awareness in general about how the pandemic is affecting the national and local economy, have featured charitable initiatives by the community, and have encouraged community members to support local businesses by buying local, particularly from smaller businesses,” she added.

Methodology: This ethnic media analysis is based on a selection of 200 summaries of articles and broadcast segments from radio, TV, print and web sources between May and December 2020, with special focus on the last six months of the year. These summaries were drawn from the 450 active ethnic media sources monitored by MIREMS.

For the mainstream media analysis, the ProQuest Databases Platform was searched using the keywords “business owners” and “COVID-19.” A total of 181 articles published in the Toronto Star from July 1 to Dec. 31, 2020, were included for review.

Source: https://newcanadianmedia.ca/what-an-inclusive-recovery-from-economic-business-firestorm-of-covid-19-could-look-like/#ethnic-m

Rejection letter ESDC sent to Black organizations ‘completely unacceptable’: Hussen

Oops!

Several Black organizations were denied federal funding through a program designed to help such groups build capacity — after Employment and Social Development Canada told them their leadership was not sufficiently Black.

Velma Morgan, the chair of Operation Black Vote, said her group received an email from the department on Tuesday saying their application did not show “the organization is led and governed by people who self-identify as Black.”

The department sent a second email the next day, saying the applications were not approved because the department had not received “the information required to move forward,” she said.

“As if we’re incompetent or foolish and we’re going to believe the second email over the original email,” Morgan said in an interview with The Canadian Press.

She said Operation Black Vote, a not-for-profit, multi-partisan organization that aims to get more Black people elected at all levels of government, is one of at least five Black organizations that were not approved for funding.

The program, called the Supporting Black Canadian Communities Initiative, provides funding to Canadian Black-led non-profit and charitable organizations to help them build capacity. The application guidelines say at least two-thirds of the leadership and governance structure must be people who self-identify as Black. The mandate of the organization must also be focused on serving Black communities.

Morgan said everyone on her team is Black. She also said the other organizations she knows about should also not have been rejected for the reason outlined in the first letter.

“If you’re from the Black community, you know that they’re Black-run and Black-focused,” she said.

Social Development Minister Ahmed Hussen said the initial letter his department sent to unsuccessful applicants was “completely unacceptable” and that he demanded a retraction as soon as he saw it.

In a thread on Twitter Thursday night, Hussen said he discussed with his department’s officials how such a mistake could have happened and implemented measures to make sure it does not happen again.

“I will continue to work with Black Canadian organizations to improve our systems,” said Hussen, who also mentioned the systemic barriers he has faced as a Black person.

The department has not yet responded to a request for comment.

Morgan said the Liberal government should hire more Black people to sit at every decision-making table.

“This is an example of what happens when we don’t have representation,” Morgan said.

The Ontario Black History Society, a registered charity dedicated to the study, preservation and promotion of Black history and heritage, is one of the groups that received both letters and had its application rejected. In an emailed statement, the organization said ESDC did not provide any reasons for the rejection beyond the two letters.

Former MP Celina Caesar-Chavannes, who left the Liberal caucus several months before the 2019 election to sit as an Independent, said many of the organizations she knows did not receive funding do not want to say anything publicly. She said they are worried speaking out will lead to the government denying them other funding chances.

“Why should these organizations be afraid of trying to speak up when something goes wrong?” said Caesar-Chavannes, who posted copies of the ESDC letters to Twitter after the organizations shared them with her.

“That’s the problem with how the government operates.”

Morgan said the letter also came after months of waiting, as her organization applied to get support to purchase equipment and retrofit its facilities in June. She said organizations were told they would get an answer in September but did not hear back until this week when they received the first letter.

“We hardly get any money from the government at all,” she said, while adding the rejection will not affect her group’s ability to operate.

“There are organizations that work with the most vulnerable in our community in terms of mental health or poverty, and those are the kinds of organizations that need the capacity funding.”

Caesar-Chavannes said that the number of organizations that contacted her has grown since she posted about the issue on Twitter.

“It’s dehumanizing that we have to keep proving (our Blackness). How many different hurdles that we have to jump through?” she said.

Source: Rejection letter ESDC sent to Black organizations ‘completely unacceptable’: Hussen

As Canadians we’re proud of diversity, so why is multicultural media being left in the dark about COVID-19

While I agree that more can and should be done, one of my observations from tracking ethnic media coverage of the 2019 election campaign was that much of their coverage reflected articles in the mainstream media, and those who relied on ethnic media would have been reasonably informed about the electoral platforms and choices.

It may be more a matter of resources than anything else, but it would be nice to know what governments are doing to publicize COVID-related health information in ethnic media:

After writing my last op-ed on the underutilization of multicultural media to disseminate clear COVID-19 information, I’ve received an overwhelming response.

Some messages were from physicians and public health officials interested in utilizing these platforms to inform communities on how to stay safe. Others were a nod of acknowledgment from members of the Canadian public who finally felt seen and heard. And many were questions about why such important platforms remained underutilized when they could have been important tools for disseminating critical life-saving information.

One of the things we are most proud of as Canadians is multiculturalism, yet there’s a divide: a lack of ethnic and linguistic diversity in mainstream media. This is why multicultural and ethnic media are a much-needed voice for minority communities across Canada. Along with providing language- and culturally-sensitive critical health information and public communication, these media foster a sense of culture and community for minority and immigrant Canadians.

While these media outlets can be very important for people with no knowledge of English or French, these platforms do more than address language barriers. For many Canadians, it’s a platform to help stay connected to one’s culture and heritage and is a heavily relied upon source of information.

The problem? These platforms can play a substantial role in sharing critical, life-saving health information, and proved able to do so with information about cancer pre-pandemic. So why aren’t they getting clear COVID-19 precaution information now?

Firstly, there is a lack of awareness. What emerged from my discussions with many physician colleagues is that many were unaware these channels existed. At the medical school education level, there needs to be better knowledge dissemination about the importance of these community platforms and how multicultural media can be leveraged to provide health related information to the public.

Secondly, there isn’t a clear bridge between mainstream and multicultural media. Mainstream media needs to do a better job at supporting and amplifying the voices of multicultural media platforms. This could be done by hosting multicultural media representatives on mainstream shows and vice versa. Moreover, government and public health bodies need to develop two-way streets with multicultural media outlets and have an ongoing regular communication with these media representatives.

Thirdly, after speaking to various multicultural media spokespersons, I learned that there is a lack of funding and financial support, particularly for radio channels. Their hands are tied: they rely heavily on advertisements to cover their expenses and cannot afford the latest technology or the means to keep pace with popular mainstream outlets. Sometimes advertising is their sole revenue, and some of those advertisements come from alternative care providers and similar sources across radio, TV and print media. As part of the advertisement package, it is hard for these channels to control what information is disseminated. As one can imagine, this can become a source of misinformation on top of the existing information vacuum created by the underutilization of these platforms, which is exponentially dangerous.

We as Canadians are proud of our multiculturalism and our public health care system, so it is heartbreaking to hear that multicultural media struggle to thrive. They are an important vehicle for delivering health-related and public communication to all Canadians. It is critical that we engage multicultural and ethnic media to ensure pandemic messaging reaches everyone nationally.

As we combat the second wave, develop an inclusive vaccination strategy, and disseminate vaccine and COVID-19 related information, it’s still not too late to incorporate linguistic and culturally sensitive print, radio and TV media outlets in our armamentarium to deliver critical health related information.

Source: As Canadians we’re proud of diversity, so why is multicultural media being left in the dark about COVID-19

Black Conservatives seek to mobilize more support in wake of Leslyn Lewis’ success

To watch:

Black Conservatives energized by the rising star of Leslyn Lewis hope to use her unexpectedly robust leadership bid to bolster Black representation in the party’s ranks.

The relaunch of one formal group of Black Conservatives and the ramped-up efforts of another come as the Conservative Party of Canada faces pressure to more firmly denounce those within its ranks who display, or even appear to display, extreme right-wing positions similar to those on full and deadly view during the riots in Washington, D.C.

Party leader Erin O’Toole’s promise to get more “Canadians to see a Conservative when they look in the mirror” requires acknowledging the party falters when talking about race, said Akolisa Ufodike, the national chair of the Association of Black Conservatives, a group that formed last year.

“High level, he’s saying that we need to be seen as a more inclusive party so how does he get there without confronting the issue?” he said.

Ufodike said one reason his group formed is to highlight what he sees as a long and proud history of inclusivity by the movement, which he said is a message some within the Black community might be more open to hearing when it comes from Black Conservatives themselves.

The group ignited a firestorm during the leadership race last year, when Lewis was making history by becoming the first Black woman to run for leadership of the party.

Despite entering as a relative unknown, she saw her campaign steadily increase in support thanks in no small part to the throngs of social conservatives attracted to her positions on topics they hold dear.

But her candidacy also suggested to many the party wasn’t entirely the bastion of what former prime minister Stephen Harper once infamously referred to as “old stock Canadians.”

The association, however, endorsed O’Toole instead of Lewis. That led to Lewis publicly slamming the group, a heated conversation between her campaign and O’Toole’s campaign and a decision by his team to decline the endorsement.

Ufodike said to have endorsed Lewis solely because she was Black would be reducing the issue to identity politics.

“We look more at how their policies, their readiness and ability to lead can best serve Canadians, including marginalized communities such as the Black community,” he said.

Lewis ultimately finished third in the race, though in certain regions of the country she had more support at one point than either O’Toole or party stalwart Peter MacKay.

Among her efforts to remain in political life, which include running in the next election in a safe Ontario seat, was work to revive a group she helped form in 2009: the Conservative Black Congress.

Its chair, Tunde Obasan, denied the group was set up solely in response to leadership race politics.

“Our main focus is to support candidates, even if they are not front-runners,” he said.

“… The more we do that, and the more we get candidates who are from the Black community, the more people who are not currently fine with the party begin to see the party as for everyone.”

At its formal relaunch Jan. 24, the group plans to unveil a parliamentary internship program named after retired senator Donald Oliver, the first Black Canadian man appointed to the Senate.

The Association of Black Conservatives, meanwhile, has been busy setting up provincial chapters to also support community and civic participation at the local levels.

It is not uncommon, both groups said, to find themselves forced to answer for the Conservatives’ perceived past sins as well as more contemporary ones.

Among them: the “barbaric cultural practices” tip line the Harper Conservatives proposed in the 2015 election campaign, O’Toole’s refusal to acknowledge the existence of systemic racism during the leadership race, and accusations that the same vein of intolerance running through the U.S. Republicans also runs through Canadian conservatives.

Recently, O’Toole’s office engaged with right-wing organization Rebel Media, sending answers via email in O’Toole’s name. Many Conservatives cut ties with the organization several years ago after inflammatory and derogatory comments by its staff.

Among its more recent reporting has been repetition of the discredited claims that the U.S. election was stolen from the Republicans, claims that led to the deadly riots in D.C.

O’Toole’s office said this week he won’t speak to Rebel Media in the future.

The strength of the party’s right wing is likely to become evident at the upcoming March policy convention. Conservative MP Derek Sloan, who finished the leadership race in fourth place, was actively encouraging his own social conservative supporters to turn out in large force to have a role in the debate.

For now, neither Black organization has committed to getting formally involved at the convention, despite it being a potential avenue to influence policy decisions or the nuts and bolts of the party’s operations.

Both groups said they are looking for direct and clear leadership from O’Toole on putting his promise of making the party more inclusive into practice.

“What I would like to see him do is to be deliberate about it, on how to support more participation from the racialized community, not only in the Black community, from the entire racialized community,” said Obasan.

“That will go a long way.”

Source: Black Conservatives seek to mobilize more support in wake of Leslyn Lewis’ success

Review finds successes, failures in Liberals’ feminist aid approach in Afghanistan

More failures than successes. Money quote: “…failure to ensure Canada’s attempts to increase gender equality included ‘a deeper understanding of Afghanistan’s local cultural context and Islamic tradition.’”

An internal review of the nearly $1 billion in foreign aid that Canada quietly spent in Afghanistan after the Canadian military pulled out has found some successes but also many failures — especially when it comes to helping women and girls.

The Global Affairs Canada review covers the period between 2014 and 2020, during which Afghanistan remained a top destination for Canadian aid dollars even after the last Canadian troops had left and public attention drifted elsewhere.

Published on the department’s website late last month, the reviewers’ final report comes amid another round of peace talks between the Afghan government and Taliban to end decades of nearly continuous fighting in the country.

It also follows a Canadian commitment in November to contribute another $270 million in aid over the next three years to Afghanistan, adding to the heavy investment that Canada has already made in the country since 2001.

The reviewers found that the $966 million in Canadian foreign aid spent since 2014 was almost entirely focused on empowering and supporting Afghan women and girls, particularly after the Liberals launched their feminist-aid policy in 2017.

Those efforts led to some tangible progress, including the adoption of gender equality in some Afghan institutions, a decrease in violence against women in some communities, more educational opportunities for girls and better health-care services for both.

“Projects in the women’s and girls’ rights and empowerment sector resulted in female beneficiaries becoming more active, confident and self-sufficient,” adds the reviewers’ report.

Yet the review, which included analyzing internal Global Affairs documents and interviews with Canadian, Afghan and international government staff and NGOs as well as average Afghans affected by the projects, found many problems as well.

Chief among them was a failure to ensure Canada’s attempts to increase gender equality included “a deeper understanding of Afghanistan’s local cultural context and Islamic tradition.” It also failed to include men and boys in its programs.

“The definition of gender roles was so central to Afghan society and culture during the period that any planned changes required not only consultation with male household members, but also with the larger community,” the report said.

Those shortcomings threatened to leave the perception of gender equality being imposed on Afghans, the report said, adding: “If not carefully managed, there was the risk that gender-equality efforts promoted by Western donors could lead to backlashes and harm.”

The reviewers cited several examples, such as women who used shelters to escape domestic violence being shunned by their families and women in the Afghan army facing direct threats, as among the unintended consequences of current efforts.

Memorial University foreign aid expert Liam Swiss, who has written extensively on the Liberals’ feminist approach to foreign aid, said the report’s findings reflected many of the concerns and criticisms that were voiced when the policy was first launched.

That includes a one-size-fits-all strategy that didn’t take into account the local conditions and culture in the countries where Canadian aid is being channelled — of which Afghanistan is one of the most difficult.

“That’s the problem when you kind of stake out a really broad set of priorities on your aid,” Swiss said. “If you’re trying to make them apply to all and to everywhere, you’re going to run into a lot of issues of local appropriateness, local receptivity.”

The reviewers also suggested that Canada was guilty of the same sins as many of its western counterparts in Afghanistan, namely directing its aid dollars toward areas of its own interest rather than toward what was really needed in the country.

That was reflected in the lack of consultation with local communities and limited consideration of the specific needs of Afghanistan’s many different ethnic and religious communities, which undermined the projects’ effectiveness and sustainability.

In fact, the reviewers found Canada did not actually have a strategy for its engagement in Afghanistan. Global Affairs also failed to adapt to the changing needs and environment as the Afghan government lost territory to the Taliban between 2017 and 2020.

The report instead paints a picture of Canadian diplomats and aid workers keeping their eyes firmly fixed on their own priorities even as the Taliban was wresting more and more of the country away from Kabul.

To that end, the reviewers said nearly all of those interviewed as part of their study believed the progress made by Canadian aid efforts over the years will be threatened or completely undone if security in the country deteriorates further.

That possibility continues to loom over Afghanistan’s future amid the peace talks and as the world waits to see whether incoming U.S. president Joe Biden will continue the Trump administration’s work to withdraw American forces from the country.

Global Affairs spokeswoman Patricia Skinner said while the report shows progress has been made in Afghanistan, the department will address the reviewers’ six recommendations — including changing how it promotes gender equality — over the next two years.

Nipa Banerjee, who previously led Canadian aid efforts in Afghanistan before joining the University of Ottawa, said she hopes the review will lead to changes – including a more expansive approach.

“With all the insecurity and everything, shouting about women’s rights only, it’s not going to be very helpful,” Banerjee said.

“And Afghans themselves think that. They’re saying it is important, but without security and without political order, nothing will succeed. Women’s programs will not go anywhere. So there has to be compromises.”

Source: Review finds successes, failures in Liberals’ feminist aid approach in Afghanistan

Huawei patent mentions use of Uighur-spotting tech

Not that surprising…

A Huawei patent has been brought to light for a system that identifies people who appear to be of Uighur origin among images of pedestrians.

The filing is one of several of its kind involving leading Chinese technology companies, discovered by a US research company and shared with BBC News.

Huawei had previously said none of its technologies was designed to identify ethnic groups.

It now plans to alter the patent.

Forced-labour camps

The company indicated this would involve asking the China National Intellectual Property Administration (CNIPA) – the country’s patent authority – for permission to delete the reference to Uighurs in the Chinese-language document.

Uighur people belong to a mostly Muslim ethnic group that lives mainly in the Xinjiang region, in north-western China.

Government authorities are accused of using high-tech surveillance against them and detaining many in forced-labour camps, where children are sometimes separated from their parents.

Beijing says the camps offer voluntary education and training.

“One technical requirement of the Chinese Ministry of Public Security’s video-surveillance networks is the detection of ethnicity – particularly of Uighurs,” said Maya Wang, from Human Rights Watch.

“While in the rest of the world, such targeting and persecution of a people on the basis of their ethnicity would be completely unacceptable, the persecution and severe discrimination of Uighurs in many aspects of life in China remain unchallenged because Uighurs have no power in China.”

Body movements

Huawei’s patent was originally filed in July 2018, in conjunction with the Chinese Academy of Sciences.

It describes ways to use deep-learning artificial-intelligence techniques to identify various features of pedestrians photographed or filmed in the street.

It focuses on addressing the fact that different body postures — for example whether someone is sitting or standing — can affect accuracy.

But the document also lists attributes by which a person might be targeted, which it says can include “race (Han [China’s biggest ethnic group], Uighur)”.

A spokesman said this reference should not have been included.

“Huawei opposes discrimination of all types, including the use of technology to carry out ethnic discrimination,” he said.

“Identifying individuals’ race was never part of the research-and-development project.

“It should never have become part of the application.

“And we are taking proactive steps to amend it.

“We are continuously working to ensure new and evolving technology is developed and applied with the utmost care and integrity.”

‘Confidential’ document

The patent was brought to light by the video-surveillance research group IPVM.

It had previously flagged a separate “confidential” document on Huawei’s website, referencing work on a “Uighur alert” system.

In that case, Huawei said the page referenced a test rather than a real-world application and denied selling systems that identified people by their ethnicity.

On Wednesday, Tom Tugendhat, who chairs the UK Parliament’s Foreign Affairs Select Committee and leads the Conservative Party’s China Research Group, told BBC News: “Chinese tech giants supporting the brutal assault on the Uighur population show us why we as consumers and as a society must be careful with who we buy our products from or award business to.

“Developing ethnic-labelling technology for use by a repressive regime is clearly not behaviour that lives up to our standards.”

Facial-recognition software

IPVM also discovered references to Uighur people in patents filed by the Chinese artificial-intelligence company Sensetime and image-recognition specialist Megvii.

Sensetime’s filing, from July 2019, discusses ways facial-recognition software could be used for more efficient “security protection”, such as searching for “a middle-aged Uighur with sunglasses and a beard” or a Uighur person wearing a mask.

A Sensetime spokeswoman said the references were “regrettable”.

“We understand the importance of our responsibilities, which is why we began to develop our AI Code of Ethics in mid-2019,” she said, adding the patent had predated this code.

Ethnic-labelling solutions

Megvii’s June 2019 patent, meanwhile, described a way of relabelling pictures of faces tagged incorrectly in a database.

It said the classifications could be based on ethnicity, for example, including “Han, Uighur, non-Han, non-Uighur and unknown”.

The company told BBC News it would now withdraw the patent application.

“Megvii recognises that the language used in our 2019 patent application is open to misunderstanding,” it said.

“Megvii has not developed and will not develop or sell racial- or ethnic-labelling solutions.

“Megvii acknowledges that, in the past, we have focused on our commercial development and lacked appropriate control of our marketing, sales, and operations materials.

“We are undertaking measures to correct the situation.”

Attribute-recognition model

IPVM also flagged image-recognition patents filed by two of China’s biggest technology conglomerates, Alibaba and Baidu, that referenced classifying people by ethnicity but did not specifically mention the Uighur people by name.

Alibaba responded: “Racial or ethnic discrimination or profiling in any form violates our policies and values.

“We never intended our technology to be used for and will not permit it to be used for targeting specific ethnic groups.”

And Baidu said: “When filing for a patent, the document notes are meant as an example of a technical explanation, in this case describing what the attribute-recognition model is rather than representing the expected implementation of the invention.

“We do not and will not permit our technology to be used to identify or target specific ethnic groups.”

But Human Rights Watch said it still had concerns.

“Any company that sells video-surveillance software and systems to the Chinese police would have to ensure that they meet the police’s requirements, which include the capacity for ethnicity detection,” Ms Wang said.

“The right thing for these companies to do is to immediately cease their sale and maintenance of surveillance equipment, software and systems to the Chinese police.”

Source: Huawei patent mentions use of Uighur-spotting tech

Bloc takes aim at new transport minister over ‘Islamic movement’ ties

Playing ugly identity politics:

The Bloc Québécois is seeking to sow doubt about Canada’s new Transport Minister Omar Alghabra over his association with what it calls “the political Islamic movement.”

Leader Yves-François Blanchet said in a release that “questions arise” due to the minister’s former role as head of the Canadian Arab Federation.

But the Bloc leader said he “refuses to accuse” the minister of anything specific.

Alghabra was the federation’s president before being elected as a Toronto-area Liberal MP in 2006.

Rather than make specific accusations, the Bloc linked to a 2016 article by a right-wing Quebec newspaper columnist that made implications about Alghabra’s past.

“It’s really questions about his past and also the separation of church and state, which is a profound value for the Bloc,” said spokesman Julien Coulombe-Bonnafous.

“We don’t want to raise any accusations, because I don’t think there’s that much.”

In 2009, then-citizenship and immigration minister Jason Kenney opted to cut funding for the Canadian Arab Federation, whose leader at the time made statements that Kenney called anti-Semitic and supportive of terrorist groups.

The Bloc’s attempt to undermine confidence in Alghabra, who was sworn in as transport minister Tuesday, follows his move to distance himself from a YouTuber who has expressed intolerant views toward LGBTQ communities.

Alghabra said in a statement Tuesday night he is a longtime advocate for LGBTQ rights and was “shocked and disappointed” to learn of a video using homophobic slurs that was posted online by Fadi Younes, whose digital marketing agency Alghabra had hired on a contract that has since been terminated.

“I was not aware of these comments before today and I wholly reject them,” said the MP for Mississauga Centre.

“We must combat ignorance, hate or intolerance in our society. I will continue to support LGBTQ rights, as we continue to build a more inclusive and tolerant society for everyone.”

Alghabra has been subjected to innuendo about his background before.

In 2018, Conservative Sen. Denise Batters apologized to Alghabra, who was born in Saudi Arabia, after she wondered aloud why the then-parliamentary secretary for the foreign affairs minister wasn’t questioned about his place of birth while speaking with the media about Canada’s diplomatic dispute with the country at the time.

“Senator, I’m a proud Canadian who is consistent in defending human rights. How about you?” Alghabra tweeted in response to a Twitter post from Batters.

The next day, he tweeted that she had called to apologize, saying he accepted the gesture and that Batters had told him “this is a lesson to all of us.”

Source: Bloc takes aim at new transport minister over ‘Islamic movement’ ties

From facial recognition, to predictive technologies, big data policing is rife with technical, ethical and political landmines

Good long read and overview of the major issues:

In mid-2019, an investigative journalism/tech non-profit called MuckRock and Open the Government (OTG), a non-partisan advocacy group, began submitting freedom of information requests to law enforcement agencies across the United States. The goal: to smoke out details about the use of an app rumoured to offer unprecedented facial recognition capabilities to anyone with a smartphone.

Co-founded by Michael Morisy, a former Boston Globe editor, MuckRock specializes in FOIs and its site has grown into a publicly accessible repository of government documents obtained under access to information laws.

As responses trickled in, it became clear that the MuckRock/OTG team had made a discovery about a tech company called Clearview AI. Based on documents obtained from Atlanta, OTG researcher Freddy Martinez began filing more requests, and discovered that as many as 200 police departments across the U.S. were using Clearview’s app, which compares images taken by smartphone cameras to a sprawling database of 3 billion open-source photographs of faces linked to various forms of personal information (e.g., Facebook profiles). It was, in effect, a point-click-and-identify system that radically transformed the work of police officers.

The documents soon found their way to a New York Times reporter named Kashmir Hill, who, in January 2020, published a deeply investigated feature about Clearview, a tiny and secretive start-up with backing from Peter Thiel, the Silicon Valley billionaire behind Paypal and Palantir Technologies. Among the story’s revelations, Hill disclosed that tech giants like Google and Apple were well aware that such an app could be developed using artificial intelligence algorithms feeding off the vast storehouse of facial images uploaded to social media platforms and other publicly accessible databases. But they had opted against designing such a disruptive and easily disseminated surveillance tool.

The Times story set off what could best be described as an international chain reaction, with widespread media coverage about the use of Clearview’s app, followed by a wave of announcements from various governments and police agencies about how Clearview’s app would be banned. The reaction played out against a backdrop of news reports about China’s nearly ubiquitous facial recognition-based surveillance networks.

Canada was not exempt. To Surveil and Predict, a detailed examination of “algorithmic policing” published this past fall by the University of Toronto’s Citizen Lab, noted that officers with law enforcement agencies in Calgary, Edmonton and across Greater Toronto had tested Clearview’s app, sometimes without the knowledge of their superiors. Investigative reporting by the Toronto Star and Buzzfeed News found numerous examples of municipal law enforcement agencies, including the Toronto Police Service, using the app in crime investigations. The RCMP denied using Clearview even after it had entered into a contract with the company — a detail exposed by Vancouver’s The Tyee.

With federal and provincial privacy commissioners ordering investigations, Clearview and the RCMP subsequently severed ties, although Citizen Lab noted that many other tech companies still sell facial recognition systems in Canada. “I think it is very questionable whether [Clearview] would conform with Canadian law,” Michael McEvoy, British Columbia’s privacy commissioner, told the Star in February.

There was fallout elsewhere. Four U.S. cities banned police use of facial recognition outright, the Citizen Lab report noted. The European Union in February proposed a ban on facial recognition in public spaces but later hedged. A U.K. court in April ruled that police facial recognition systems were “unlawful,” marking a significant reversal in surveillance-minded Britain. And the European Data Protection Board, an EU agency, informed Commission members in June that Clearview’s technology violates Pan-European law enforcement policies. As Rutgers University law professor and smart city scholar Ellen Goodman notes, “There’s been a huge blowback” against the use of data-intensive policing technologies.

There’s nothing new about surveillance or police investigative practices that draw on highly diverse forms of electronic information, from wire taps to bank records and images captured by private security cameras. Yet during the past decade or so, dramatic advances in big data analytics, biometrics and AI, stoked by venture capital and law enforcement agencies eager to invest in new technology, have given rise to a fast-growing data policing industry. As the Clearview story showed, regulation and democratic oversight have lagged far behind the technology.

U.S. startups like PredPol and HunchLab, now owned by ShotSpotter, have designed so-called “predictive policing” algorithms that use law enforcement records and other geographical data (e.g. locations of schools) to make statistical guesses about the times and locations of future property crimes. Palantir’s law-enforcement service aggregates and then mines huge data sets consisting of emails, court documents, evidence repositories, gang member databases, automated licence plate readers, social media, etc., to find correlations or patterns that police can use to investigate suspects.

Yet as the Clearview fallout indicated, big data policing is rife with technical, ethical and political landmines, according to Andrew Ferguson, a University of the District of Columbia law professor. As he explains in his 2017 book, The Rise of Big Data Policing, analysts have identified an impressive list: biased, incomplete or inaccurate data, opaque technology, erroneous predictions, lack of governance, public suspicions about surveillance and over-policing, conflicts over access to proprietary algorithms, unauthorized use of data and the muddied incentives of private firms selling law enforcement software.

At least one major study found that some police officers were highly skeptical of predictive policing algorithms. Other critics point out that by deploying smart city sensors or other data-enabled systems, like transit smart cards, local governments may be inadvertently providing the police with new intelligence sources. Metrolinx, for example, has released Presto card user information to police while London’s Metropolitan Police has made thousands of requests for Oyster card data to track criminals, according to The Guardian. “Any time you have a microphone, camera or a live-feed, these [become] surveillance devices with the simple addition of a court order,” says New York civil rights lawyer Albert Cahn, executive director of the Surveillance Technology Oversight Project (STOP).

The authors of the Citizen Lab study, lawyers Kate Robertson, Cynthia Khoo and Yolanda Song, argue that Canadian governments need to impose a moratorium on the deployment of algorithmic policing technology until the public policy and legal frameworks can catch up.

Data policing was born in New York City in the early 1990s when then-police Commissioner William Bratton launched “Compstat,” a computer system that compiled up-to-date crime information then visualized the findings in heat maps. These allowed unit commanders to deploy officers to neighbourhoods most likely to be experiencing crime problems.

Originally conceived as a management tool that would push a demoralized police force to make better use of limited resources, Compstat is credited by some as contributing to the marked reduction in crime rates in the Big Apple, although many other big cities experienced similar drops through the 1990s and early 2000s.

The 9/11 terrorist attacks sparked enormous investments in security technology. The past two decades have seen the emergence of a multi-billion-dollar industry dedicated to civilian security technology, everything from large-scale deployments of CCTVs and cybersecurity to the development of highly sensitive biometric devices — fingerprint readers, iris scanners, etc. — designed to bulk up the security around factories, infrastructure and government buildings.

Predictive policing and facial recognition technologies evolved on parallel tracks, both relying on increasingly sophisticated analytics techniques, artificial intelligence algorithms and ever deeper pools of digital data.

The core idea is that the algorithms — essentially formulas, such as decision-trees, that generate predictions — are “trained” on large tranches of data so they become increasingly accurate, for example at anticipating the likely locations of future property crimes or matching a face captured in a digital image from a CCTV to one in a large database of headshots. Some algorithms are designed to use a set of rules with variables (akin to following a recipe). Others, known as machine learning, are programmed to learn on their own (trial and error).
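The “training” idea described above can be illustrated with a toy example. The sketch below uses entirely synthetic data and a hypothetical one-rule model (a decision stump, the simplest form of decision tree): it learns by trial and error, trying every hour-of-day threshold and keeping the one that best matches historical incident records.

```python
# Minimal sketch of "training" a one-rule classifier (a decision stump),
# showing how an algorithm's predictions are fitted to historical data.
# All records here are synthetic and hypothetical.

def train_stump(samples):
    """Pick the hour-of-day threshold that best separates
    'incident' from 'no incident' in the training records."""
    best = None
    for threshold in range(24):
        # Candidate rule: predict an incident when hour >= threshold.
        correct = sum((hour >= threshold) == label for hour, label in samples)
        if best is None or correct > best[1]:
            best = (threshold, correct)
    return best[0]

# Synthetic history: (hour of day, incident reported?) pairs.
history = [(9, False), (11, False), (14, False),
           (19, True), (21, True), (23, True)]

threshold = train_stump(history)
print(threshold)       # the hour at which predicted risk switches on
print(21 >= threshold) # prediction for a 9 p.m. patrol window
```

Real systems use far richer features and models, but the principle is the same — and so is the weakness: the rule is only as good as the historical records it was fitted to.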

The risk lies in the quality of the data used to train the algorithms — what was dubbed the “garbage-in-garbage-out” problem in a study by the Georgetown Law Center on Privacy and Technology. If there are hidden biases in the training data — e.g., it contains mostly Caucasian faces — the algorithm may misread Asian or Black faces and generate “false positives,” a well-documented shortcoming if the application involves identifying a suspect in a crime.

Similarly, if a poor or racialized area is subject to over-policing, there will likely be more crime reports, so the data from that neighbourhood will show higher-than-average rates of certain types of criminal activity, a pattern that then justifies still more over-policing and racial profiling. Some crimes, meanwhile, are under-reported and so barely register in the data these algorithms learn from.
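That feedback loop can be made concrete with a toy simulation. The sketch below uses entirely hypothetical numbers: two neighbourhoods with identical underlying crime, where incidents are only recorded where officers patrol, and each round’s patrol allocation is derived from the recorded data.

```python
# Toy simulation (hypothetical numbers) of the over-policing feedback loop:
# equal underlying crime, unequal patrols, recorded data drives allocation.

def simulate(rounds=5):
    true_rate = {"A": 100, "B": 100}      # identical underlying incidents
    patrol_share = {"A": 0.8, "B": 0.2}   # area A starts out over-policed
    for _ in range(rounds):
        # Crimes are only *recorded* in proportion to patrol presence.
        recorded = {k: true_rate[k] * patrol_share[k] for k in true_rate}
        total = sum(recorded.values())
        # Next round's patrols are allocated from the recorded counts.
        patrol_share = {k: recorded[k] / total for k in recorded}
    return patrol_share

shares = simulate()
print(shares)  # the skew toward A persists despite equal true crime rates
```

Even after many rounds, the allocation never corrects itself: the recorded data faithfully reproduces the initial patrol bias, not the underlying reality.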

Other predictive and AI-based law enforcement technologies, such as “social network analysis” — mapping an individual’s web of personal relationships, gleaned, for example, from social media platforms or from cross-referenced lists of gang members — promised to predict that individuals known to police were at risk of becoming embroiled in violent crimes.

This type of sleuthing seemed to hold out some promise. In one study, criminologists at Cardiff University found that “disorder-related” posts on Twitter reflected crime incidents in metropolitan London — a finding that suggests how big data can help map and anticipate criminal activity. In practice, however, such surveillance tactics can prove explosive. This happened in 2016, when U.S. civil liberties groups revealed documents showing that Geofeedia, a location-based data company, had contracts with numerous police departments to provide analytics based on social media posts to Twitter, Facebook, Instagram, etc. Among the individuals targeted by the company’s data: protestors and activists. Chastened, the social media firms rapidly blocked Geofeedia’s access.

In 2013, the Chicago Police Department began experimenting with predictive models that assigned risk scores for individuals based on criminal records or their connections to people involved in violent crime. By 2019, the CPD had assigned risk scores to almost 400,000 people, and claimed to be using the information to surveil and target “at-risk” individuals (including potential victims) or connect them to social services, according to a January 2020 report by Chicago’s inspector general.

These tools can draw incorrect or biased inferences in the same way that overreliance on police checks in racialized neighbourhoods results in what could be described as guilt by address. The Citizen Lab study noted that the Ontario Human Rights Commission identified social network analysis as a potential cause of racial profiling. In the case of the CPD’s predictive risk model, the system was discontinued in 2020 after media reports and internal investigations showed that people were added to the list based solely on arrest records, meaning they might not even have been charged, much less convicted of a crime.

Early applications of facial recognition software included passport security systems or searches of mug shot databases. But in 2011, the Insurance Corporation of B.C. offered Vancouver police the use of facial recognition software to match photos of Stanley Cup rioters with driver’s licence images — a move that prompted a stern warning from the province’s privacy commissioner. In 2019, the Washington Post revealed that FBI and Immigration and Customs Enforcement (ICE) investigators regarded state databases of digitized driver’s licences as a “gold mine for facial recognition photos” which had been scanned without consent.

In 2013, Canada’s federal privacy commissioner released a report on police use of facial recognition that anticipated the issues raised by the Clearview app earlier in 2020. “[S]trict controls and increased transparency are needed to ensure that the use of facial recognition conforms with our privacy laws and our common sense of what is socially acceptable.” (Canada’s data privacy laws are only now being considered for an update.)

The technology, meanwhile, continues to gallop ahead. New York civil rights lawyer Albert Cahn points to the emergence of “gait recognition” systems, which use visual analysis to identify individuals by their walk; these systems are reportedly in use in China. “You’re trying to teach machines how to identify people who walk with the same gait,” he says. “Of course, a lot of this is completely untested.”

The predictive policing story evolved somewhat differently. The methodology grew out of analysis commissioned by the Los Angeles Police Department in the early 2010s. Two data scientists, Jeff Brantingham and George Mohler, used mathematical modelling to forecast copycat crimes based on data about the location and frequency of previous burglaries in three L.A. neighbourhoods. They published their results and soon set up PredPol to commercialize the technology. Media attention soon followed, as news stories played up the seemingly miraculous power of a Minority Report-like system that could do a decent job anticipating incidents of property crime.

Operationally, police forces used PredPol’s system by dividing precincts into 150-metre-square “cells” that police officers were instructed to patrol more intensively during periods when PredPol’s algorithm forecast criminal activity. In the post-2009 credit crisis period, the technology seemed to promise that cash-strapped American municipalities would get more bang for their policing buck.
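A minimal sketch of that grid-cell approach, using hypothetical coordinates and 150-metre cells, might bin recent incident reports by cell and rank the busiest cells for patrol:

```python
from collections import Counter

CELL = 150  # metres per grid-cell side, as described above

def cell_of(x, y):
    """Map an incident's coordinates (metres from a precinct corner)
    to its grid cell."""
    return (int(x // CELL), int(y // CELL))

# Hypothetical incident coordinates from recent crime reports.
incidents = [(40, 60), (120, 90), (400, 310), (410, 330), (430, 305), (900, 120)]

counts = Counter(cell_of(x, y) for x, y in incidents)
# Direct patrols to the highest-count cells first.
hotspots = [cell for cell, _ in counts.most_common(2)]
print(hotspots)  # → [(2, 2), (0, 0)]
```

Commercial systems layer statistical forecasting on top of such counts, but the underlying map is the same: recorded incidents aggregated into fixed geographic cells.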

Other firms, from startups to multinationals like IBM, entered the market with innovations, for example, incorporating other types of data, such as socio-economic data or geographical features, from parks and picnic tables to schools and bars, that may be correlated to elevated incidents of certain types of crime. The reported crime data is routinely updated so the algorithm remains current.

Police departments across the U.S. and Europe have invested in various predictive policing tools, as have several in Canada, including Vancouver, Edmonton and Saskatoon. Whether they have made a difference is an open question. As with several other studies, a 2017 review by analysts with the Institute for International Research on Criminal Policy, at Ghent University in Belgium, found inconclusive results: some places showed improved results compared to more conventional policing, while in other cities, the use of predictive algorithms led to reduced policing costs, but little measurable difference in outcomes.

Revealingly, the city where predictive policing really took hold, Los Angeles, has rolled back police use of these techniques. Last spring, the LAPD tore up its contract with PredPol in the wake of mounting community and legal pressure from the Stop LAPD Spying Coalition, which found that individuals who posed no real threat, mostly Black or Latino, were ending up on police watch lists because of flaws in the way the system assigned risk scores.

“Algorithms have no place in policing,” Coalition founder Hamid Khan said in an interview this summer with MIT Technology Review. “I think it’s crucial that we understand that there are lives at stake. This language of location-based policing is by itself a proxy for racism. They’re not there to police potholes and trees. They are there to police people in the location. So location gets criminalized, people get criminalized, and it’s only a few seconds away before the gun comes out and somebody gets shot and killed.” (Similar advocacy campaigns, including proposed legislation governing surveillance technology and gang databases, have been proposed for New York City.)

There has been one other interesting consequence: police resistance. B.C.-born sociologist Sarah Brayne, an assistant professor at the University of Texas (Austin), spent two-and-a-half years embedded with the LAPD, exploring the reaction of law enforcement officials to algorithmic policing techniques by conducting ride-alongs as well as interviews with dozens of veteran cops and data analysts. In results published last year, Brayne and collaborator Angèle Christin observed “strong processes of resistance fuelled by fear of professional devaluation and threats of performance tracking.”

Before shifts, officers were told which grids to drive through, when and how frequently, and the locations of their vehicles were tracked by an on-board GPS device to ensure compliance. But Brayne found that some would turn off the tracking device, which they regarded with suspicion. Others just didn’t buy what the technology was selling. “Patrol officers frequently asserted that they did not need an algorithm to tell them where crime occurs,” she noted.

In an interview, Brayne said that police departments increasingly see predictive technology as part of the tool kit, despite questions about effectiveness or other concerns, like racial profiling. “Once a particular technology is created,” she observed, “there’s a tendency to use it.” But Brayne added one other prediction, which has to do with the future of algorithmic policing in the post-George Floyd era — “an intersection,” as she says, “between squeezed budgets and this movement around defunding the police.”

The widening use of big data policing and digital surveillance poses, according to Citizen Lab’s analysis as well as critiques from U.S. and U.K. legal scholars, a range of civil rights questions, from privacy and freedom from discrimination to due process. Yet governments have been slow to acknowledge these consequences. Big Brother Watch, a British civil liberties group, notes that in the U.K., the national government’s stance has been that police decisions about the deployment of facial recognition systems are “operational.”

At the core of the debate is a basic public policy principle: transparency. Do individuals have the tools to understand and debate the workings of a suite of technologies that can have tremendous influence over their lives and freedoms? It’s what Andrew Ferguson and others refer to as the “black box” problem. The algorithms, designed by software engineers, rely on certain assumptions, methodologies and variables, none of which are visible, much less legible to anyone without advanced technical know-how. Many, moreover, are proprietary because they are sold to local governments by private companies. The upshot is that these kinds of algorithms have not been regulated by governments despite their use by public agencies.

New York City Council moved to tackle this question in May 2018 by establishing an “automated decision systems” task force to examine how municipal agencies and departments use AI and machine learning algorithms. The task force was to devise procedures for identifying hidden biases and to disclose how the algorithms generate choices so the public can assess their impact. The group included officials from the administration of Mayor Bill de Blasio, tech experts and civil liberties advocates. It held public meetings throughout 2019 and released a report that November. NYC was, by most accounts, the first city to have tackled this question, and the initiative was, initially, well received.

Going in, Cahn, the New York City civil rights lawyer, saw the task force as “a unique opportunity to examine how AI was operating in city government.” But he describes the outcome as “disheartening.” “There was an unwillingness to challenge the NYPD on its use of (automated decision systems).” Some other participants agreed, describing the effort as a waste.

If institutional obstacles thwarted an effort in a government the size of the City of New York, what does better and more effective oversight look like? A couple of answers have emerged.

In his book on big data policing, Andrew Ferguson writes that local governments should start at first principles, and urges police forces and civilian oversight bodies to address five fundamental questions, ideally in a public forum:

  • Can you identify the risks that your big data technology is trying to address?
  • Can you defend the inputs into the system (accuracy of data, soundness of methodology)?
  • Can you defend the outputs of the system (how they will impact policing practice and community relationships)?
  • Can you test the technology (offering accountability and some measure of transparency)?
  • Is police use of the technology respectful of the autonomy of the people it will impact?

These “foundational” questions, he writes, “must be satisfactorily answered before green-lighting any purchase or adopting a big data policing strategy.”

In addition to calling for a moratorium and a judicial inquiry into the uses of predictive policing and facial recognition systems, the authors of the Citizen Lab report made several other recommendations, including: the need for full transparency; provincial policies governing the procurement of such systems; limits on the use of ADS in public spaces; and the establishment of oversight bodies that include members of historically marginalized or victimized groups.

The federal government, meanwhile, has made advances in this arena, in what University of Ottawa law professor and privacy expert Teresa Scassa describes as a “really interesting” development.

The Treasury Board Secretariat’s “Directive on Automated Decision-Making,” issued in 2019 and in effect since April 2020, requires federal departments and agencies, except those involved in national security, to conduct “algorithmic impact assessments” (AIA) to evaluate unintended bias before procuring or approving the use of technologies that rely on AI or machine learning. The policy requires the government to publish AIAs, release software code developed internally and continually monitor the performance of these systems. In the case of proprietary algorithms developed by private suppliers, federal officials have extensive rights to access and test the software.

In a forthcoming paper, Scassa points out that the directive includes due process rules and looks for evidence of whether systemic bias has become embedded in these technologies, which can happen if the algorithms are trained on skewed data. She also observes that not all algorithm-driven systems generate life-altering decisions, e.g., chatbots that are now commonly used in online application processes. But where they are deployed in “high impact” contexts such as policing, e.g., with algorithms that aim to identify individuals caught on surveillance videos, the policy requires “a human in the loop.”

The directive, says Scassa, “is getting interest elsewhere,” including the U.S. Ellen Goodman, at Rutgers, is hopeful this approach will gain traction with the Biden administration. In Canada, where provincial governments oversee law enforcement, Ottawa’s low-key but seemingly thorough regulation points to a way for citizens to shine a flashlight into the black box that is big data policing.

Source: From facial recognition, to predictive technologies, big data policing is rife with technical, ethical and political landmines

Let’s make 2021 the year we eliminate online hate in Canada

Of note, although I contest the piece’s characterization of Israel’s non-vaccination of Palestinians — a legitimate criticism of the Israeli government, not “a demonstrably false accusation tantamount to a modern-day blood libel.” One can also question the further codification of the IHRA definition, given that it is sometimes used more broadly than intended. The other specific recommendations, however, are reasonable:

2020 was challenging. In addition to the horror of disease, the pandemic brought other troubling developments, including a sharp rise in hatred disseminated online. Canadians are clearly immune neither to the pandemic nor to the growing hate it appears to be exacerbating.  

Online hate is not a new phenomenon. At my organization, CIJA, we have been working on the issue since 2013. But, like the coronavirus, online hate has exploited weaknesses in our society to the detriment of all. As our lives continue to migrate online, the very platforms that proved to be a lifeline in so many ways also served as a springboard for spreading vicious hatred.  

Asian Canadians have been wrongfully and absurdly accused of deliberately unleashing COVID-19. 

Indigenous people, subjected to hatred and mistreatment since generations before the invention of the internet, many living in conditions that should embarrass all Canadians, are experiencing vicious online attacks on their culture and identity.    

Muslims, women, and the LGBTQ2+ community are regularly targeted by haters online, where Islamophobia, misogyny and homophobia continue to flourish.   

Good old-fashioned racists seized the opportunity provided by a global discussion about anti-Black hatred to, paradoxically, spread anti-Black hatred.   

And, of course, Jews were accused of this conspiracy or that one, from creating COVID-19 to profiting from the pandemic to claiming that Israel has leveraged the pandemic to oppress Palestinians by denying them the vaccine – a demonstrably false accusation tantamount to a modern-day blood libel, and one that the Palestinians themselves have refuted.   

All deeply offensive, to be sure, but being offensive is not the only cause for concern.  

If online hate were simply offensive, it would be easier to dismiss. However, as CIJA and the many partners we have worked with over the years – including those who have recently joined us to form the Canadian Coalition to End Online Hate – have increasingly observed, online hate can, and too often does, turn into real-world violence.   

This. Must. Stop.   

The federal government should deliver on its commitments

Following the 2019 election, the Liberals committed to devising a national strategy to end online hate, an issue that was explicitly included in the Prime Minister’s mandate letters to the Ministers of Justice; Public Safety and Emergency Preparedness; Heritage; and Diversity and Inclusion and Youth. 

They have a very good blueprint to work from: the June 2019 report on online hate produced by the House of Commons Standing Committee on Justice and Human Rights, then chaired by Montreal-area MP Anthony Housefather. The report followed the murders in Christchurch, Pittsburgh, and Poway, all cases of online hate morphing into real world violence. 

It is now time to take the next steps. We, and the groups we work with through the Canadian Coalition to End Online Hate, a broad-based alliance of close to 40 (and growing) organizations representing a diverse array of communities, are calling for the following concrete actions.  

We propose:   

  • Increasing resources for law enforcement, Crown attorneys, and judges to ensure they receive sufficient training on how to apply existing laws to deal with online hate 
  • Directing Statistics Canada to address the gap in data to help us determine the scope of the problem and monitor progress  
  • Ensuring we achieve balance between combating online hate and protecting freedom of expression, notably by formulating a definition of “hate” and “hatred” that is consistent with Supreme Court of Canada jurisprudence 
  • Creating a civil remedy to address online hate and  
  • Establishing strong and clear regulations for online platforms and Internet service providers about how they monitor and transparently address incidents of hate spread on their platforms.   

The Role of Social Media Giants  

Platforms and providers do not have the best record when it comes to tracking and eliminating online hate. They must do better. And they will only do so with government pressure.  Canadian law must be strengthened to put the onus on platforms and providers to ensure that hateful content does not get published in their spaces. 

A national strategy to address online hate must include both the development of clear, harmonized, and uniform regulations, which apply to all platforms and providers operating in Canada, and an independent regulator to enforce them. 

These regulations should include a mandatory directive that providers incorporate appropriate definitions of hate and hatred. In the case of the Jewish community, we are advocating for the adoption of the International Holocaust Remembrance Alliance (IHRA) definition of antisemitism to be included in their user codes of conduct, algorithms, moderator policies, and terms of service.  

We also strongly believe that providers must make it easier for users to flag hateful content and be transparent about how complaints are adjudicated.  

COVID-19 has significantly accelerated our migration online, which was already well underway. It is imperative that we collectively do what is necessary to ensure the online space is a safe and hate-free place for everyone. 

Source: https://www.thestar.com/opinion/contributors/2021/01/11/lets-make-2021-the-year-we-eliminate-online-hate-in-canada.html