Why I boycotted Ottawa’s AI task force

Not sure how Gideon Christian’s boycott improves representation. It risks being a case of “cutting off one’s nose to spite one’s face” rather than having a meaningful impact:

Our community deserves stronger representation at the table. Who better to help develop guardrails for racial bias in AI than those who have already felt its sting?

The Black community understands viscerally what is at stake when algorithms decide how long you spend in jail, whether you get a job interview, a loan or suffer a false arrest. Our lived experiences and expertise would only strengthen (not weaken) Canada’s AI strategy, making it more robust and more just for everyone.

Yet, the message from those in charge has been clear: they don’t really want us to participate in developing AI strategy.

That is why I decided to take a stand: As a Black scholar whose decade of research has identified the real harm AI poses to the Black community, and one who believes in the genuine participation of this community in addressing that harm, I could not in good conscience take any step directly or indirectly that would lend moral legitimacy to the current composition of Canada’s AI task force.

Therefore, I refrained from making any submission during its consultation process, which ended Oct. 31.

When Black voices are meaningfully included, I and others in the Black community will be happy to contribute.

Gideon Christian is an associate professor and university research chair in AI and law at the University of Calgary. His research focuses on racial bias in AI technologies.

Source: Why I boycotted Ottawa’s AI task force

And a letter from Liberal MP Greg Fergus, Boycotting the AI task force is counterproductive:

I was disappointed to see Gideon Christian’s recent Policy Options article “Why I boycotted Ottawa’s AI task force.”

I am a Member of Parliament. I hear from young people every day about their concerns regarding their place in the future of this country, and the incessant barriers they face in trying to forge their path in it. We all share an essential role in fighting for and championing our youth. We must strive to dismantle these barriers.

I am certain Professor Christian, based on his extensive career, has seen firsthand how the young, diverse, brilliant minds of our future make us stronger. They push us to innovate, to be better. We are building a world for them to inherit, one bolstered by technological growth. They deserve a seat at the table.

The appointment of a young Black scholar to the task force, regardless of the timing, gives her a valuable opportunity to contribute. I find it deeply unfortunate that Professor Christian would reduce her appointment to a symbolic gesture or optics, or that he would imply that she is lacking in qualification.

Rather than disputing her appointment, why would he not choose to act as a mentor? He chooses to boycott. This is not a choice I would make. I hope he will change his mind.

We need to be fighting for unity and co-operation where all are included, not tearing each other down. As an older Black Canadian, I am particularly pleased to see this emerging young Black leader access tables of influence.

I truly think we stand to gain by making places for the leaders of tomorrow. I believe we will soon see what can be accomplished by this task force and the great work done by young Canadians.

Together, we can build a future worthy of our youth.

Canada’s border agency plans to use AI to screen everyone entering the country — and single out ‘higher risk’ people

Inevitable given the numbers involved and the need to triage applications:

Canada Border Services Agency is planning to use AI to check everyone visiting or returning to the country to predict whether they are at risk of breaking the law. 

Anyone identified as “higher risk” could be singled out for further inspection.

The traveller compliance indicator (TCI), which has been tested at six land ports of entry, was developed using five years of CBSA travellers’ data. It assigns a “compliance score” for every person entering Canada. It will be used to enforce the Customs Act and related regulations.

The AI-assisted tool is expected to launch as early as 2027 and is meant to help border services officers at all land, air and marine ports of entry decide whether to refer travellers and the goods they are carrying for secondary examination, according to an assessment report obtained under an access to information request. 

“We use the obtained data to build predictive models in order to predict the likelihood of a traveller to be compliant,” said the report, which was submitted by the border agency to the Treasury Board.

“TCI will improve the client experience by reducing processing time at the borders. The system will allow officers to spend less time on compliant travellers and reduce the number of unnecessary selective referrals.”

However, experts are alarmed by the lack of public engagement and input into the tool’s development. They worry that the system may reinforce human biases against particular groups of travellers, such as immigrants and visitors from certain countries, because the quality of the analytics is only as good as the data fed into the system.

“If you’ve historically been very critical over a certain group, then that will be in the data and we’ll transfer that into the tool,” said Vancouver-based immigration lawyer Will Tao, who obtained the report.

“You look for the problems and you find problems where you’re looking, right?”
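Tao’s point can be sketched in a few lines: any score derived from past referral decisions simply replays those decisions. The groups, counts, and scoring rule below are invented for illustration and are not drawn from the CBSA report — they only show the mechanism by which historical scrutiny becomes future “risk.”

```python
# Toy illustration of bias transfer: a "compliance score" learned from
# biased historical referrals. Group "A" was referred to secondary
# inspection far more often, regardless of actual conduct.
from collections import Counter

# Historical records: (origin_group, was_referred_to_secondary)
history = (
    [("A", True)] * 60 + [("A", False)] * 40
    + [("B", True)] * 10 + [("B", False)] * 90
)

referrals = Counter(group for group, referred in history if referred)
totals = Counter(group for group, _ in history)

# A naive model scores "risk" as the historical referral rate per group,
# so yesterday's enforcement pattern is reproduced as tomorrow's flag.
risk_score = {group: referrals[group] / totals[group] for group in totals}

print(risk_score)  # {'A': 0.6, 'B': 0.1}
```

Nothing in this toy model measures actual compliance; it only measures who was looked at — which is exactly the “you find problems where you’re looking” dynamic.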

The government report said the border agency serves more than 96 million travellers a year, and trying to keep up with expected growth would require the addition of hundreds of border officers. In addition, physical limitations make it impossible to add extra booths at some points of entry.

The AI tool, the report said, will help keep border processing times at current levels even with an expected increase in the number of travellers. 

“No decisions are automated,” the report said. “Rather the current primary processing is being supported with a flag indicating whether a traveller’s information matches a compliance pattern.”

However, if an officer follows a mistaken recommendation from the tool, it could have impacts that could “last longer,” the report added.

“Once a risk score or indicator is presented to an officer, it can heavily influence their judgment, which in practice means the system is shaping outcomes even if the final authority is technically still human,” said University of Toronto professor Ebrahim Bagheri, who focuses on AI and the study of data and society.

“A false positive is when the system flags someone as risky or non-compliant even though they are in fact compliant. In the border context, that could mean a traveller is singled out for extra questioning or secondary examination even though they’ve done nothing wrong.”

The system is designed to display information of interest to an officer, such as a traveller’s means of transport and who accompanied them. 

It also captures “live determinants” which can include information such as whether the person is travelling alone, the type of identification they presented and the license plate of the vehicle they used, as well as any data from the traveller’s previous trips in CBSA’s records….

Source: Canada’s border agency plans to use AI to screen everyone entering the country — and single out ‘higher risk’ people

Globe editorial: Ottawa’s AI push must translate into savings [translation]

Other areas ripe for AI use are the overhead functions of HR and Finance:

…That is a good thing. Translators are no strangers to machines; they’ve been using computer tools for decades. But they have often warned that the programs are imperfect and nowhere near good enough to replace them. “At times, a ChatGPT translation will make sense,” Joachim Lépine, co-founder of LION Translation Academy in Sherbrooke, Que., wrote in a LinkedIn post this month. But “‘sometimes useful’ is not good enough for high-stakes situations. Only humans have professional judgment. Period.”

However, new generative AI tools are rapidly improving in quality and are good enough to competently handle routine translations of mundane texts such as policy documents, press releases or memos. The more the programs learn from the language fed into them, the better they should become – although more critical documents such as laws and court rulings should continue to be handled by humans.

A centrepiece of the bureau’s rethink is its AI project, a program called PSPC Translate, which draws from the government’s data and language storehouse. It could serve as a bellwether for further government efficiencies and savings using AI. True success would be if the initiative translated into real savings and allowed government to slash the size of the bureau. 

Source: Ottawa’s AI push must translate into savings

The A.O.C. Deepfake Was Terrible. The Proposed Solution Is Delusional.

Brave New World and 1984 combined:

…The other crucial thing that the abundance of such easily generated information makes scarce is credibility. And that is nowhere more stark than in the case of photos, audio and video, because they are among the key mechanisms with which we judge claims about reality. Lose that, lose reality.

It would be nice if, like members of Congress or large media organizations, we all had a large staff who could be dispatched to disprove false claims and protect our reputations and in that small way buttress the sanctity of facts. Since we don’t, we need to find other models that we can all access. Scientists and parts of the tech industry have come up with a few very promising frameworks — known as zero-knowledge proofs, secure enclaves, hardware authentication tokens using public key cryptography, distributed ledgers, for example — about which there is much more to say at another moment. Many other tools may yet arise. But unless we start taking the need seriously now before we lose what’s left of proof of authenticity and verification, governments will step right into the void. If the governments are not run by authoritarians already, it probably won’t take long till they are.
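The frameworks the author gestures at share one core move: bind a digest of the content to a key at capture time, so any later alteration is detectable. The sketch below is a deliberately simplified stand-in — a real scheme would use an asymmetric signature (e.g. Ed25519 keyed inside a camera’s hardware token) so anyone could verify without holding the secret; here a keyed HMAC and an invented device key illustrate only the mechanics.

```python
# Sketch of content authentication by signed digest (simplified:
# an HMAC stands in for a real asymmetric hardware signature).
import hashlib
import hmac

DEVICE_KEY = b"secret-key-inside-camera-hardware"  # hypothetical

def sign_capture(image_bytes: bytes) -> str:
    # Hash the raw capture, then bind the hash to the device key.
    digest = hashlib.sha256(image_bytes).digest()
    return hmac.new(DEVICE_KEY, digest, hashlib.sha256).hexdigest()

def verify_capture(image_bytes: bytes, tag: str) -> bool:
    # Constant-time comparison guards against timing attacks.
    return hmac.compare_digest(sign_capture(image_bytes), tag)

original = b"...raw sensor data..."
tag = sign_capture(original)

print(verify_capture(original, tag))             # untouched capture verifies
print(verify_capture(original + b" edited", tag))  # any edit breaks the tag
```

The design point is that credibility moves from the image itself (easy to fake) to the signature chain (hard to fake), which is why provenance standards focus on keys and hardware rather than on detecting fakery after the fact.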

Source: The A.O.C. Deepfake Was Terrible. The Proposed Solution Is Delusional.

    Turley-Ewart: Canada’s risk-averse businesses are slouching toward AI

    Arguably, the hardest issue to address:

    …Yet, the slow adoption of AI raises questions about Canadian businesses. What are they doing to invest in their own success? The inability of so many to effectively manage AI integration that will enable them to help themselves and improve productivity, economic growth and GDP per capita points to a culture of complacency.

    Canada’s aging digital infrastructure is a monument to that complacency. “Canada trails every other G7 nation in AI computing infrastructure, possessing only one-eighth to one-tenth of the available compute performance per capita compared to countries like the U.S.,” according to RBC. AI is the high-speed train that needs high-speed tracks and engines. Canadian AI is running on 1960s-era rails built for plodding diesel engines.

    What makes business AI-adoption rates so puzzling, as Minister Solomon hinted at in a recent interview, is that Canada is known for its “pioneering frontier AI research.” It is home to the “Godfather of AI,” and Nobel Laureate in Physics, University of Toronto’s Geoffrey Hinton. The country also has AI research organizations that do world-leading work: The Montreal Institute for Learning Algorithms, the Vector Institute in Toronto, as well as the Alberta Machine Intelligence Institute.

    That Canada is blessed with such rich AI research and innovation, and yet 88 per cent of our businesses have not even started to integrate AI into their operating models, speaks to a troubling lack of curiosity.

    We face a future where an inquisitive person writes a prompt in their AI tool of choice asking: Why didn’t Canadian businesses adopt AI sooner and prosper? 

    If we don’t change course, the answer will be: “Risk aversion.” Most Canadian businesses lacked the courage to innovate.

    Source: Canada’s risk-averse businesses are slouching toward AI

    The Chatbot Culture Wars Are Here

    Here we go again with all the toxicity and partisanship, not to mention the lack of ethics and courage:

    …Critics of this strategy call it “jawboning,” and it was the subject of a high-profile Supreme Court case last year. In that case, Murthy v. Missouri, it was Democrats who were accused of pressuring social media platforms like Facebook and Twitter to take down posts on topics such as the coronavirus vaccine and election fraud, and Republicans challenging their tactics as unconstitutional. (In a 6-to-3 decision, the court rejected the challenge, saying the plaintiffs lacked standing.)

    Now, the parties have switched sides. Republican officials, including several Trump administration officials I spoke to who were involved in the executive order, are arguing that pressuring A.I. companies through the federal procurement process is necessary to stop A.I. developers from putting their thumbs on the scale.

    Is that hypocritical? Sure. But recent history suggests that working the refs this way can be effective. Meta ended its longstanding fact-checking program this year, and YouTube changed its policies in 2023 to allow more election denial content. Critics of both changes viewed them as capitulation to right-wing critics.

    This time around, the critics cite examples of A.I. chatbots that seemingly refuse to praise Mr. Trump, even when prompted to do so, or Chinese-made chatbots that refuse to answer questions about the 1989 Tiananmen Square massacre. They believe developers are deliberately baking a left-wing worldview into their models, one that will be dangerously amplified as A.I. is integrated into fields like education and health care.

    There are a few problems with this argument, according to legal and tech policy experts I spoke to.

    The first, and most glaring, is that pressuring A.I. companies to change their chatbots’ outputs may violate the First Amendment. In recent cases like Moody v. NetChoice, the Supreme Court has upheld the rights of social media companies to enforce their own content moderation policies. And courts may reject the Trump administration’s argument that it is trying to enforce a neutral standard for government contractors, rather than interfering with protected speech.

    “What it seems like they’re doing is saying, ‘If you’re producing outputs we don’t like, that we call biased, we’re not going to give you federal funding that you would otherwise receive,’” Genevieve Lakier, a law professor at the University of Chicago, told me. “That seems like an unconstitutional act of jawboning.”

    There is also the problem of defining what, exactly, a “neutral” or “unbiased” A.I. system is. Today’s A.I. chatbots are complex, probability-based systems that are trained to make predictions, not give hard-coded answers. Two ChatGPT users may see wildly different responses to the same prompts, depending on variables like their chat histories and which versions of the model they’re using. And testing an A.I. system for bias isn’t as simple as feeding it a list of questions about politics and seeing how it responds.
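The “probability-based” point can be made concrete: a model scores candidate next tokens, converts the scores to a probability distribution, and samples from it — so identical prompts can legitimately yield different answers. The scores, tokens, and temperatures below are invented for illustration; the softmax itself is the standard formulation.

```python
# Why identical prompts can produce different outputs: next-token
# scores become a probability distribution, which is then sampled.
import math
import random

def softmax(scores, temperature=1.0):
    # Higher temperature flattens the distribution (more variety);
    # lower temperature sharpens it (the top token dominates).
    exps = [math.exp(s / temperature) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

scores = [2.0, 1.0, 0.2]  # hypothetical logits for three candidate tokens

cool = softmax(scores, temperature=0.5)
warm = softmax(scores, temperature=2.0)

random.seed(0)
tokens = ["yes", "maybe", "no"]
sample = random.choices(tokens, weights=warm, k=5)
print(cool, warm, sample)
```

This is one reason “testing for bias” is slippery: a single run of a question battery samples one path through a distribution, and a different seed, chat history, or model version samples another.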

    Samir Jain, a vice president of policy at the Center for Democracy and Technology, a nonprofit civil liberties group, said the Trump administration’s executive order would set “a really vague standard that’s going to be impossible for providers to meet.”

    There is also a technical problem with telling A.I. systems how to behave. Namely, they don’t always listen.

    Just ask Elon Musk. For years, Mr. Musk has been trying to create an A.I. chatbot, Grok, that embodies his vision of a rebellious, “anti-woke” truth seeker.

    But Grok’s behavior has been erratic and unpredictable. At times, it adopts an edgy, far-right personality, or spouts antisemitic language in response to user prompts. (For a brief period last week, it referred to itself as “Mecha-Hitler.”) At other times, it acts like a liberal — telling users, for example, that man-made climate change is real, or that the right is responsible for more political violence than the left.

    Recently, Mr. Musk has lamented that A.I. systems have a liberal bias that is “tough to remove, because there is so much woke content on the internet.”

    Nathan Lambert, a research scientist at the Allen Institute for AI, told me that “controlling the many subtle answers that an A.I. will give when pressed is a leading-edge technical problem, often governed in practice by messy interactions made between a few earlier decisions.”

    It’s not, in other words, as straightforward as telling an A.I. chatbot to be less woke. And while there are relatively simple tweaks that developers could make to their chatbots — such as changing the “model spec,” a set of instructions given to A.I. models about how they should act — there’s no guarantee that these changes will consistently produce the behavior conservatives want.

    But asking whether the Trump administration’s new rules can survive legal challenges, or whether A.I. developers can actually build chatbots that comply with them, may be beside the point. These campaigns are designed to intimidate. And faced with the potential loss of lucrative government contracts, A.I. companies, like their social media predecessors, may find it easier to give in than to fight.

    “Even if the executive order violates the First Amendment, it may very well be the case that no one challenges it,” Ms. Lakier said. “I’m surprised by how easily these powerful companies have folded.”

    Source: The Chatbot Culture Wars Are Here

    Rempel Garner: For youth, AI is making immigration cuts even more urgent.

    Will be interesting to see if the annual levels plan makes any reference to expected impacts of AI. Valid concerns, and a need for further thinking about appropriate policy responses, shorter- and longer-term:

    …So at writing, the only consensus on what skills will make someone employable in a five-to-ten-year period, particularly in white-collar jobs, is advanced critical thinking and problem solving ability acquired through decades of senior level managerial and product creation experience. So the question for anyone without those skills – read, youth – is: how can someone acquire those skills if AI is taking away entry level research and writing jobs? And how can they do that while competing with hundreds of thousands of non-permanent foreign workers?

    While many parts of that question may remain without clear answers (e.g. whether current public investments in existing modalities of education make sense), there are some that are much more obvious. Where Canadian employers do have a need for entry level labour, those jobs should not be filled by non-Canadians except under extremely exceptional circumstances, so that Canadian youth can gain the skills needed to survive in a labour market where they’re competing against AI for work.

    And translating that principle into action means that the Liberal government must (contrary to Coyne’s column) immediately and massively curtail the allowance of temporary foreign labour to continue to suppress Canadian wages and remove opportunity from Canadian youth. It’s clear that they haven’t given the topic much thought. Even their most recent Liberal platform only focused on reskilling mid-career workers, not the fact that AI will likely stymie new entrants to the labour market from ever getting to the mid-career point to begin with. While older Liberals may be assuming that the kids will be alright because they grew up with technology, data suggests AI will disrupt the labour market faster and more profoundly than even offshoring manufacturing did. Given that context, immediately weaning Canadian businesses off their over-reliance on cheap foreign labour seems like a no brainer.

    But on that front, Canada’s federal immigration policy, particularly its annual intake targets, fails to account for the anticipated labour market disruptions driven by artificial intelligence. This oversight may have arisen because many of those setting these targets have had the luxury of honing their skills over decades in an economic landscape where life was far more affordable than it is today. Or, because it’s easier to listen to the spin from lobbyists who argue that they have the right to cheap foreign labour than to the concerns of millions of jobless Canadian youth. Nevertheless, the strategy of allowing Canadian youth to languish in this hyper-rapidly evolving and disruptive job market, while admitting hundreds of thousands of temporary low-skilled workers and issuing work permits to an equal number of bogus asylum claimants, demands an urgent and profound rethink.

    Indifference to this issue, at best, will likely suppress wages and opportunities as the economy transitions to an AI integrated modality. At worst, it may bring widespread AI precipitated hyper-unemployment to an already unaffordable country, and all the negative social impacts associated with the same: debt, crime, and despair.

    So the Liberals can either immediately push their absurdly wide-open immigration gates to a much more closed position while they grapple with this labour market disruption on behalf of Canadians, or pray that Canadians forgive them for failing to do so.

    Source: For youth, AI is making immigration cuts even more urgent.

    Jobs survive, pay and purpose don’t: The quiet risk of workplace AI

    Interesting and a cause for further consideration of implications:

    …As sociologists of work, we see several reasons for concern, even if fears of immediate and widespread AI displacement are potentially overblown. Claims of a “white-collar bloodbath” and “job apocalypse” – that is, “something alarming happening to the job market” – certainly make for attention-grabbing headlines (and, at this stage of the purported advancements, they probably should).

    Erosion before displacement

    If predictions about future AI capabilities are even partly correct, we may be seeing only the early contours of what lies ahead. Already, signs are emerging that the conditions and perceived value of some white-collar work are shifting. At Amazon, software engineers report that AI tools are accelerating production timelines while reducing time for thoughtful coding and increasing output expectations. According to New York Times reporting, many now spend more time reviewing and debugging machine-generated code than writing their own. The work remains, but its character is changing – less autonomous, more pressured, and arguably less fulfilling.

    This shift in work quality may be creating broader economic ripples. Barclays economists have found that workers in AI-exposed roles are experiencing measurably slower wage growth – nearly three-quarters of a percentage point less per year for every 10-point increase in AI exposure. Employers may already be recalibrating the value of these positions, even as hiring continues.

    Uneven impacts

    Different forms of white-collar work may face vastly different futures under AI, depending on professional autonomy and control over the technology. Consider radiologists, initially seen as vulnerable given AI’s strength in image analysis. Yet, the profession has grown, with AI enabling faster analysis rather than replacement. Crucially, radiologists retain control. They make final diagnoses, communicate with patients and carry legal responsibility. Here, AI complements human expertise in what economists refer to as Jevons Paradox – where technological efficiency increases demand by making services cheaper and more accessible.
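The Jevons Paradox claim about radiology reduces to a short calculation. All the numbers below are invented for illustration: an efficiency gain halves the cost per scan, and when demand is elastic enough, total scans more than double — so total radiologist-hours rise even though each scan needs fewer of them.

```python
# Toy Jevons-paradox arithmetic with invented numbers: efficiency
# gains cut the cost per scan, but elastic demand more than offsets
# the per-unit savings in radiologist time.
cost_per_scan = 100.0
scans_demanded = 1_000

efficiency_gain = 2.0                       # AI makes each scan 2x cheaper
new_cost = cost_per_scan / efficiency_gain  # 50.0

elasticity = -1.5                           # elastic demand (|e| > 1)
price_ratio = new_cost / cost_per_scan      # 0.5
new_demand = scans_demanded * price_ratio ** elasticity

hours_per_scan = 1.0
new_hours_per_scan = hours_per_scan / efficiency_gain

print(new_demand)                      # ≈ 2828 scans, up from 1000
print(new_demand * new_hours_per_scan) # ≈ 1414 hours, up from 1000
```

The same arithmetic run with inelastic demand (|elasticity| < 1) would show total hours falling — which is roughly the fork separating the radiology and transcription stories in the excerpt.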

    Medical transcription offers a more cautionary tale. As AI speech-to-text tools improve, transcriptionists have shifted from producing reports to editing and error detection. In theory, this sounds like higher-skilled oversight work. In practice, it often means scanning AI output under time pressure and reduced job discretion. While jobs such as this one still exist, their perceived value is diminishing. The U.S. Bureau of Labor Statistics projects a 5 per cent employment decline between 2023 and 2033 – and given the rapid improvement in transcription models, that estimate may prove conservative.

    Adaptation isn’t necessarily promotion

    AI will undoubtedly create new roles and opportunities, particularly where human judgment remains essential. But we shouldn’t assume this future will preserve job quality. The story of retail banking offers a sobering lesson: automation first increased the number of teller jobs – but didn’t raise pay. Ultimately, tellers weren’t replaced by machines but by digital banking, shifting many to call centre jobs with less autonomy and lower wages. Even in the absence of widespread job displacement, AI may follow a similar pattern – reshaping many jobs in ways that reduce discretion, increase surveillance and erode their overall value.

    There remains considerable debate about how disruptive AI will be. But amid that uncertainty lies a risk of public complacency – or even disengagement from the issue. As Canadians, we need a sustained and open conversation about how these workplace changes are unfolding and where they might lead.

    Paul Glavin is an associate professor of sociology at McMaster University. Scott Schieman is professor of sociology and Canada Research Chair at the University of Toronto. Alexander Wilson is a graduate student in sociology at the University of Toronto.

    Source: Jobs survive, pay and purpose don’t: The quiet risk of workplace AI

    Colby Cosh: The lifelike nature of artificial intelligence

    Interesting test:

    …Well, fast-forward a dozen centuries, and along come Copernicus asking “What if Earth isn’t at the centre after all?”; Kepler asking “What if the orbits aren’t circular, but elliptical?”; and Newton, who got to the bottom of the whole thing by introducing the higher-level abstraction of gravitational force. Bye-bye epicycles.

    None of these intellectual steps, mind you, added anything to anyone’s practical ability to predict planetary motions. Copernicus’s model took generations to be accepted for this reason (along with the theological/metaphysical objections to the Earth not being at the centre of the universe): it wasn’t ostensibly as sophisticated or as powerful as the old reliable geocentric model. But you can’t get to Newton, who found that the planets and earthbound objects are governed by the same elegant and universal laws of motion, without Copernicus and Kepler.

    Which, in 2025, raises the question: could a computer do what Newton did? Vafa’s research group fed orbital data to AIs and found that they could correctly behave like ancient astronomers: make dependable extrapolations about the future movements of real planets, including the Earth. This raises the question whether the algorithms in question generate their successful orbital forecasts by somehow inferring the existence of Newtonian force-abstractions. We know that “false,” overfitted models and heuristics can work for practical purposes, but we would like AIs to be automated Newtons if we are going to live with them. We would like AIs to discover new laws and scientific principles of very high generality and robustness that we filthy meatbags haven’t noticed yet.

    What Vafa and his colleagues found is that the AIs remain in a comically pre-Copernican state. They can be trained to make accurate predictions by being presented with observational data, but it seems that they may do so on the basis of “wrong” implicit models, ones that depend on mystifying trigonometric clutter instead of the beautiful inverse-square force law that Newton gave us. The epicycles are back!

    The paper goes on to do more wombat-dissecting, using the game of Othello to show how AI reasoning can produce impressive results from (apparently) incomplete or broken underlying models. It is all very unlike the clean, rigorous “computing science” of the past 100 years: whatever you think of the prospects of AI, it is clear that the complexity of what we can create from code, or just buy off the shelf, is now approaching the complexity of biological life.

    Source: Colby Cosh: The lifelike nature of artificial intelligence

    AI Review of “The New Electoral Map and Diversity”

    Interesting to read an AI Review of my Hill Times article The New Electoral Map and Diversity. Reasonable take:

    …Summary of the Work

    The manuscript offers a detailed examination of Canada’s reconfigured electoral map—now totalling 343 ridings—and its impact on the representation of immigrants, visible and religious minorities, and Indigenous peoples. It provides a side-by-side comparison of the 2013 and 2023 ridings using data on population percentages. Key insights focus on how demographic shifts, driven particularly by higher immigration rates, have yielded notable changes in suburban regions, with an increase in ridings that have between 5% and 20% visible or religious minorities, while Indigenous representation shows a slight decline in population share in certain ridings.

    The author highlights how these shifts may manifest in future elections—particularly 2025—when a new cohort of naturalized citizens will become eligible voters. This could lead to both an increase in elected minority candidates and the need for political parties to navigate the interests and tensions of increasingly diverse ridings.


    Strengths

    1. Clarity of Data Presentation:
      • The manuscript uses clear tables and figures (e.g., Figures 1 through 7) to illustrate changes across various demographic groups, offering a robust breakdown of regional and group-based shifts.
    2. Timely Focus:
      • With a federal election looming and ongoing immigration trends, the study delivers an immediate perspective on how the new electoral map might influence representation and campaign strategies.
    3. Comprehensive Scope:
      • The analysis considers multiple forms of diversity—visible minorities, religious minorities, and Indigenous populations—providing a well-rounded view of demographic changes.
    4. Relevance for Policy and Electoral Strategy:
      • By noting the shifting landscapes of marginal ridings and growth areas, the manuscript offers insights valuable to policymakers, political parties, and electoral reform advocates.

    Areas for Further Development

    1. Methodological Transparency:
      • While the descriptive statistics are informative, clarifying the exact data sources (e.g., census data files and how the boundaries were mapped from 2013 to 2023) would bolster replicability. A brief discussion of how ridings were reallocated from old to new boundaries would further help readers understand the transition.
    2. Intersectional Analysis:
      • The manuscript could benefit from an intersectional lens, exploring how demographic factors (e.g., visible minority status combined with religious identification) may yield complex patterns. Currently, the study addresses religious and visible minorities separately; acknowledging overlaps may provide deeper insights.
    3. Electoral Outcomes and Voter Turnout:
      • Adding potential scenarios or brief references to how these demographic changes may affect turnout or party competition could supplement the descriptive findings. While this is outside the primary scope, a short reflection could give a sense of real-world impact.
    4. Indigenous Context:
      • Further elaboration on how growth in immigrant populations may affect Indigenous communities could strengthen the discussion. These points are alluded to, but additional depth on policy or community-level implications would clarify the evolving political dynamic.
    5. Longitudinal Comparisons:
      • If longer-term data (beyond the 2013 to 2023 cycles) are available, referencing previous realignments or older demographic snapshots could confirm whether the highlighted trends are accelerating, plateauing, or changing in nature.

    Style and Structure

    The manuscript is mostly well-organized, with clear headings and subheadings. However, some sections can be streamlined for readability:

    • Figures and text could be more closely integrated. For instance, highlighting main findings from a figure in the paragraph immediately following that figure helps the reader connect the visuals to the written analysis.
    • Where tables and figures convey similar data, consider whether both formats are necessary or whether they could be combined.


    Overall Impression

    This work significantly contributes to the understanding of how newly created ridings intersect with demographic changes, underscoring shifts for multiple minority groups and offering a snapshot of potential impacts ahead of the 2025 election. By expanding the methodological detail and further situating these findings within longer-term trends or intersectional dimensions of representation, the manuscript would offer an even more robust resource for analysts, political parties, and community stakeholders interested in the evolving face of Canadian democracy.

    Source: AI Review of “The New Electoral Map and Diversity”