Accenture: A digital transformation can make Canada’s immigration system world-class

Although written by one of the companies likely vying for contracts under the various modernization initiatives, it makes valid high-level arguments. But the article is largely silent on the policy and program simplification and streamlining necessary for the success of digital transformation, the harder aspect given the various stakeholder interests involved, both inside and outside government.

Also less convinced of the need for “faster” policy development; better policy development and operational implementation is the greater issue.

Worked with Accenture and other consultants at Service Canada from 2004 to 2007, and was impressed with their competence and expertise, and with how they were able to provide a different and needed perspective on some of the issues we were dealing with:

A digitally empowered, efficient, strategic and fair immigration system will be essential for Canada to meet its ambitious immigration target of 1.2 million new residents between 2021 and 2023.

Immigration, Refugees and Citizenship Canada (IRCC) is well on its way to making that happen. It was one of the first federal departments to work with the Canadian Digital Service (CDS) and to express enthusiasm for digital transformation. The COVID pandemic was a catalyst to move faster and address backlogs while responding to new travel and entry requirements.

In recognition of the department’s ongoing work internally and with partner organizations such as the Canada Border Services Agency (CBSA), IRCC was named a winner of the 2022 Canadian Digital Government Community Awards, including excellence in innovation for its online citizenship test; excellence in open government for its digital application status trackers; and excellence in product management for its permanent residence digital intake portal.

However, ongoing travel and border restrictions and other global concerns have slowed momentum. The federal government is investing hundreds of millions to modernize IRCC’s IT infrastructure to ensure those targets can be met in the face of these new realities.

Finland, the United States and Australia have all had success modernizing their immigration systems. Canada could look to their example for inspiration.

Finland: Ten years ago, the Finnish Immigration Service was experiencing process inefficiencies and significant backlogs, inspiring the start of its transformation journey. The service decided to develop a modern case-management tool to meet its demands. The result was the end-to-end electronic immigration case-management system.

The system integrates every process within the immigration, citizenship and asylum workflow. It moves from electronic submission through processing and communication to electronic archiving. There are 15,000 potential users based in Finnish government locations and offices across the globe. Authorities consider this project a best-in-class immigration management system.

It was subsequently expanded through the implementation of “EnterFinland” – an online self-service portal, designed for both residence permits and citizenship cases.

EnterFinland is a testament to cross-government collaboration, with solutions that have introduced supplementary chatbots and artificial intelligence applications into workflows across departments. Importantly, many departments had to come on board with the new system for it to be successful.

The United States: United States Citizenship and Immigration Services (USCIS) has been on a mission to become fully digital. Two programs were put in place to achieve that goal: the end-user experience design (EUXD) program and the “myUSCIS” program. The EUXD program puts application users at the centre of design efforts. Working with the community helped enhance user experience, define proper project requirements and increase user adoption and satisfaction.

The myUSCIS program transforms the immigration process with a digital portal and digitized forms for paperless processing. The driving goal was to allow users to track progress during their immigration journey.

A recurring theme of each digital transformation was understanding that it would take more than a single technology or going paperless. It required a business transformation and cultural shift within the organization.

Accordingly, in Canada, by framing digital transformation efforts in terms of people, process and policy, IRCC will optimize its own transformation efforts.

Australia: Australia has kept its annual immigration target intake steady at more than 160,000 per year for a number of years. A decade ago, it embarked on a modernization effort. The “seamless traveler” vision was created in response to an increase in citizenship and online visa applications, increased processing times, resource challenges and security threats.

What officials learned was that digitizing processes wasn’t enough to achieve operational efficiencies. New processes needed to be intuitive and human-centred to empower the workforce. In October 2020, Australia introduced its reusable permissions capability, a platform that provides consistent processing, approvals and decision-making for departments that issue visas, permits, accreditation, licences and registrations.

The Australian Department of Home Affairs streamlined processes at the border by digitizing existing Incoming Passenger Cards. This included collecting additional health-related declarations and passenger contact information to support the national COVID response and speed up processing times.

There are three keys to success in Canada – people, process and policy.

People

Starting with the experience of the end user is essential. Technology adoption must be about meeting the applicants’ needs from their vantage point – pivoting existing processes to user-centric digital experiences, and then adopting the latest technologies that can deliver on those goals.

When thinking about a world-class immigration system from the perspective of those wishing to become Canadian, the system must be fast and efficient, with timely information and each step of the process well thought out. It should be easy to use, with services and processes that are intuitive and accessible, and it should understand and accommodate the needs of the applicant. The processes should also be fair and transparent, so applicants know the status of their file as it progresses.

Also, the right stakeholder groups must be included. The countries that had the greatest successes valued co-ordination.

Grouping and classifying cases that are related – such as various categories relating to families – will mean they can be processed more efficiently and may help address biases. Government can then respond with digital systems that take into consideration variables including diversity, equity and inclusion, among others.

Process 

To realize a user-first vision, the government must fully embrace digital culture, tools and capabilities. This is no longer just an IT exercise. Every directorate and organization must become a digital organization, with a workforce that adopts new approaches and shares information seamlessly. As workforces become increasingly hybrid in nature, building the right digital culture and skills across the end-to-end organization will be essential.

Policy

Even if we fix technologies and create the best digital experience, none of that is useful unless the policy supports it. Given the transformative and disruptive nature of digital transformation, flexible policy is paramount to capture and respond to input from competing and changing priorities.

In Canada, we need to find a faster way to update policy. For instance, in the U.K., the government’s open innovation team follows a “policy at pace” style to actively engage citizen users.

Canada has a strong foundation and clear will to improve the ways it manages immigration and delivers user-centric digital experiences to newcomers as they navigate each step of their immigration journey. By considering lessons from around the globe, we can achieve a truly modern, innovative and world-class immigration system.

Source: A digital transformation can make Canada’s immigration system world-class

Accenture: Is artificial intelligence sexist?

An interesting look at the question of bias in AI, and at some AI-based and related techniques to reduce bias. While written in terms of gender, the approach (use analytics, bias-hunting algorithms, fairness software tools) could be deployed more widely:

Artificial intelligence (AI) is bringing amazing changes to the workplace, and it’s raising a perplexing question: Are those robots sexist?

While it may sound strange that AI could be gender-biased, there’s evidence that it’s happening when organizations aren’t taking the right steps.

In the age of #MeToo and the drive to achieve gender parity in the workplace, it’s critical to understand how and why this occurs and to continue to take steps to address the imbalance. At Accenture, a global professional services company, we have set a goal to have a gender-balanced work force by 2025. There is no shortage of examples that demonstrate how a diverse mindset leads to better results, from reports of crash test dummies that are modelled only on male bodies, to extensive academic studies on the performance improvements at firms with higher female representation. We know that diversity makes our business stronger and more innovative – and it is quite simply the right thing to do.

To make sure that AI is working to support this goal, it’s imperative to know how thought leaders, programmers and developers can use AI to fix the problem.

The issue matters because Canadian workplaces still suffer from gender inequality. Analysis by the Canadian Press earlier this year found that none of Canada’s TSX 60 companies listed a woman as its chief executive officer, and two-thirds did not include even one female among their top earners in their latest fiscal year.

Add to this the reports about behaviour in the workplace that undermines the principles of diversity and inclusion. Of course, AI isn’t the cause, but it can perpetuate the problem unless we focus on solutions. AI can contribute to biased behaviour because the knowledge that goes into its algorithm-based technology came from humans. AI “learns” to make decisions and solve complex problems, but the roots of its knowledge come from whatever we teach it.

There are lots of examples showing that what we put into AI can lead to bias:

  • A team of researchers at the University of Washington studied the top 100 Google image search results for 45 professions. Women were generally under-represented in the searches, as compared with representation data from the Bureau of Labor Statistics. The images of women were also frequently more risqué than how a female worker would actually show up for some jobs, such as construction. Finally, at the time, 27 per cent of American CEOs were women, but only 11 per cent of the Google image results for “CEO” were women (not including Barbie).
  • In a study by Microsoft’s Ece Kamar and Stanford University’s Himabindu Lakkaraju, the researchers acknowledged that the Google images system relies on training data, which could lead to blind spots. For instance, an AI algorithm could see photos of black dogs and white and brown cats – but when shown a photo of a white dog, it may mistake it for a cat.
  • An AI research scientist named Margaret Mitchell trained computers to have human-like reactions to sequences of images. A machine saw a house burning to bits. It described the view as “an amazing view” and “spectacular” – seeing only the contrast and bright colours, not the destruction. This came after the computer was shown a sequence of solely positive images, reflecting a limited viewpoint.
  • Late last year, media reported on Google Translate converting the names of occupations from Turkish, a gender-neutral language, to English. The translator-bots decided, among other things, that a doctor must be a “he,” while any nurse had to be “she.”

These examples come from biased training data, where one or more groups may be under-represented or not represented at all. It’s a problem that can exacerbate gender bias when AI is used for hiring and human resources. Statistical biases can also exist in areas including forecasting, reporting and selection.

The bias can come from inadequate labelling of the populations within the data; for example, there were too few white dogs represented in the database of the machine looking at dogs and cats. Or it can come from machines relying too heavily on variables that are highly correlated with a protected attribute; for example, weeding out job candidates because their address is a women’s dorm on campus, without realizing this was keeping out female applicants.
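The proxy-variable problem can be made concrete in a few lines of code. In this sketch, all records and field names are invented for illustration: an “address” feature never mentions gender, yet because it is almost perfectly aligned with gender, filtering on it reproduces a gendered outcome.

```python
# Synthetic illustration of proxy-variable bias: "address" never mentions
# gender, but correlates with it, so a rule keyed on address leaks gender.
# All records below are invented for illustration.
from collections import Counter

# (address_type, gender, hired)
records = [
    ("womens_dorm", "F", False), ("womens_dorm", "F", False),
    ("womens_dorm", "F", True),  ("mixed_dorm", "M", True),
    ("mixed_dorm", "M", True),   ("mixed_dorm", "F", True),
    ("off_campus", "M", True),   ("off_campus", "M", True),
]

def hire_rate_by(field_index):
    """Hiring rate grouped by one field of the record."""
    counts, hires = Counter(), Counter()
    for rec in records:
        counts[rec[field_index]] += 1
        hires[rec[field_index]] += rec[2]  # True counts as 1
    return {key: hires[key] / counts[key] for key in counts}

# Grouped by address, the rule looks like a "neutral" signal...
print(hire_rate_by(0))
# ...but because address tracks gender, the same filter produces a
# gendered hiring gap.
print(hire_rate_by(1))
```

Nothing in the "neutral" rule references gender directly; the correlation between the two fields carries the bias through on its own.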

Gender bias can also come from poor human judgment in what information goes into AI and its algorithms. For example, a job search algorithm may be told by its programmers to concentrate on graduates from certain programs in particular geographic locations, which happen to have few women enrolled.

Ironically, one of the best ways to fix AI gender bias involves deploying AI.

The first step is to use analytics to identify gender bias in AI. A Boston-based firm called Palatine Analytics ran an AI-based study looking at performance reviews at five companies. At first the study found that men and women were equally likely to meet their work goals. A deeper, AI-based analysis found that when men were reviewing other men, they gave them higher scores than they gave to women – which was leading to women getting promoted less frequently than men. Traditional analytics looked only at the scores, while the AI-based research helped analyze who was giving out the marks.
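The reviewer-aware slicing described above can be sketched in a few lines. The scores below are invented, and Palatine Analytics’ actual methods are not public, so this is only an illustration of the idea: overall averages look roughly equal, while grouping by reviewer as well as reviewee surfaces the gap.

```python
# Sketch of reviewer-aware analysis: average review scores sliced by
# (reviewer gender, reviewee gender). All scores are invented.
from statistics import mean

# (reviewer_gender, reviewee_gender, score out of 5)
reviews = [
    ("M", "M", 4.6), ("M", "M", 4.4), ("M", "F", 3.9), ("M", "F", 4.0),
    ("F", "M", 3.8), ("F", "M", 3.9), ("F", "F", 4.2), ("F", "F", 4.3),
]

# Overall averages by reviewee look roughly equal...
print(round(mean(s for _, g, s in reviews if g == "M"), 2))
print(round(mean(s for _, g, s in reviews if g == "F"), 2))

# ...slicing by reviewer as well shows male reviewers scoring men
# noticeably higher than they score women.
pairs = {}
for reviewer, reviewee, score in reviews:
    pairs.setdefault((reviewer, reviewee), []).append(score)
for key in sorted(pairs):
    print(key, round(mean(pairs[key]), 2))
```

The point of the second pass is simply to add who gave the score as a grouping variable; the aggregate alone cannot reveal that pattern.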

A second method to weed out gender bias is to develop algorithms that can hunt it down. Scientists at Boston University have been working with Microsoft on a concept called word embeddings – sets of data that serve as a kind of computer dictionary used by AI programs. They’ve combed through hundreds of billions of words from public data, keeping legitimate correlations (man is to king as woman is to queen) and altering ones that are biased (man is to computer programmer as woman is to homemaker), to create an unbiased public data set.
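The core projection step behind this kind of word-embedding debiasing can be shown with toy vectors. The three-dimensional vectors and the simple he/she gender direction here are invented simplifications (real embeddings have hundreds of dimensions and estimate the direction from many word pairs); the sketch only shows the "remove the bias component" idea.

```python
# Toy sketch of the "neutralize" step used in word-embedding debiasing:
# subtract a vector's component along an estimated gender direction.
# Vectors are invented 3-d toys; real embeddings are learned and larger.
import numpy as np

he = np.array([1.0, 0.2, 0.1])
she = np.array([-1.0, 0.2, 0.1])
gender_dir = he - she
gender_dir = gender_dir / np.linalg.norm(gender_dir)  # unit bias direction

# A profession vector that has picked up a spurious gender component.
programmer = np.array([0.6, 0.9, 0.3])

def neutralize(vec, direction):
    """Remove vec's projection onto the bias direction."""
    return vec - np.dot(vec, direction) * direction

debiased = neutralize(programmer, gender_dir)

print(float(np.dot(programmer, gender_dir)))  # prints 0.6 (biased component)
print(float(np.dot(debiased, gender_dir)))    # prints 0.0 (component removed)
```

Legitimate pairs such as he/she keep their gender component by design; only words that should be gender-neutral, like profession names, get projected off the bias direction.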

The third step is to design software that can root out bias in AI decision-making. Accenture has created an AI Fairness Tool, which looks for patterns in the data that feed its machines, and then tests and retests the algorithms to root out bias, including subtle forms that humans might not easily see, to ensure people are being fairly assessed. For example, one startup called Knockri uses video analytics and AI to screen job candidates; another, Textio, has a database of some 240 million job posts, to which it applies AI to root out biased terms.

AI and gender bias may seem like a problem, but it comes with its own solution. It’s our future – developing and deploying the technology properly can take us from #MeToo to a better hashtag: #GettingToEqual.