Data science education lacks a much-needed focus on ethics

Of note:

Undergraduate training for data scientists – a role dubbed the sexiest job of the 21st century by Harvard Business Review – falls short in preparing students for the ethical use of data science, our new study found.

Data science lies at the nexus of statistics and computer science applied to a particular field such as astronomy, linguistics, medicine, psychology or sociology. The idea behind this data crunching is to use big data to address otherwise unsolvable problems, such as how health care providers can create personalized medicine based on a patient’s genes and how businesses can make purchase predictions based on customers’ behavior.

The U.S. Bureau of Labor Statistics projects a 15% growth in data science careers over the period of 2019-2029, corresponding with an increased demand for data science training. Universities and colleges have responded to the demand by creating new programs or revamping existing ones. The number of undergraduate data science programs in the U.S. jumped from 13 in 2014 to at least 50 as of September 2020. 

As educators and practitioners in data science, we were prompted by the growth in programs to investigate what is covered, and what is not covered, in data science undergraduate education.

In our study, we compared undergraduate data science curricula with the expectations for undergraduate data science training put forth by the National Academies of Sciences, Engineering and Medicine. Those expectations include training in ethics. We found most programs dedicated considerable coursework to mathematics, statistics and computer science, but little training in ethical considerations such as privacy and systemic bias. Only 50% of the degree programs we investigated required any coursework in ethics.

Why it matters

As with any powerful tool, the responsible application of data science requires training in how to use data science and to understand its impacts. Our results align with prior work that found little attention is paid to ethics in data science degree programs. This suggests that undergraduate data science degree programs may produce a workforce without the training and judgment to apply data science methods responsibly. This primer on data science ethics covers real-world harms.

It isn’t hard to find examples of irresponsible use of data science. For instance, policing models that have a built-in data bias can lead to an elevated police presence in historically over-policed neighborhoods. In another example, algorithms used by the U.S. health care system are biased in a way that causes Black patients to receive less care than white patients with similar needs. 

We believe explicit training in ethical practices would better prepare a socially responsible data science workforce.

What still isn’t known

While data science is a relatively new field – still being defined as a discipline – guidelines exist for training undergraduate students in data science. These guidelines prompt the question: How much training can we expect in an undergraduate degree? 

The National Academies recommend training in 10 areas, including ethical problem solving, communication and data management.

Our work focused on undergraduate data science degrees at schools classified as R1, meaning they engage in high levels of research activity. Further research could examine the amount of training and preparation in various aspects of data science at the master’s and Ph.D. levels and the nature of undergraduate data science training at schools of different research levels.

Given that many data science programs are new, there is considerable opportunity to compare the training that students receive with the expectations of employers. 

What’s next

We plan to expand on our findings by investigating the pressures that might be driving curriculum development for degrees in other disciplines that are seeing similar job market growth.

Source: https://theconversation.com/data-science-education-lacks-a-much-needed-focus-on-ethics-164372

McParland: Renaming Ryerson University to appease the delicate is probably harmless, if pointless

Valid critique of single-minded blinkers:

The only reason I knew anything about Egerton Ryerson, before he ran afoul of the forces of statue reclamation, was because, for a brief period, I attended the Toronto school that took his name.

That was a long time ago. Ryerson was a mere polytechnical institute at the time and no one cared much who it was named after. Given I was to spend time there, I checked out the man whose name was on the building. Turned out he was a key figure in the staid, grey, ultra-respectable clique that ran Toronto in the early and middle decades of the 19th century. Most of them were rigid, unbending figures, steeped in their self-regard, but Ryerson was an education maven: arguing that education should be mandatory, schools should be free, teachers should be professionally trained, textbooks should include Canadian authors, schools should be run independently and freed of the monopolistic hands of the priests. For that he won wide plaudits and remained a respected and admired figure well into the current century, until history was suddenly revised and he became a reviled character accused of plotting to demean and degrade Canada’s Indigenous people.

His sin was that, approached for advice on a means of educating Aboriginal children, he advocated for teaching in English in boarding schools away from families. While he could hardly be blamed for the horror show the system later became, his presence at the birth of the concept has seen him seized on by revisionist extremists intent on denouncing the dead for failing to adopt 21st century processes in a 19th century world.

The old-timey Ryerson Polytechnical Institute I attended has since grown considerably, sprawling over a network of streets and byways all over central Toronto and proudly re-branding itself as a fully-fledged university. Now it is to have a new name, because any association with Egerton Ryerson is a wholly unsatisfactory state of affairs for the ultra-woke, easily offended young people who make up the student body or the timid functionaries who populate the administration.

The decision was announced Thursday after approval by the university’s board of governors, based on the recommendations of a report commissioned last November. In addition to designating Ryerson an unperson, the board agreed the university “will not reinstall, restore or replace” a statue that had been pulled down and disfigured, and will issue “an open call for proposals for the rehoming of the remaining pieces … to promote educational initiatives.” Anyone looking for an extra kneecap or a spare left hand as a conversation piece or garden ornament should presumably apply at the bursar’s office.

Ceremonies to promote “healing and closure” will be held at the spot the statue once occupied. Board members agreed something will also have to be done about “Eggy,” a school mascot that will obviously no longer do unless the faculty redirects its interests towards the reproductive habits of chickens.

If a new name makes the delicate daisies at Ryerson happy it seems kind of harmless. And maybe it’s just as well. Parts of the university border on Dundas Street, a main thoroughfare christened after another long-dead figure who got himself mixed up with the wrong side of history. Since the city had already decided to rename the offending stretches of pavement, the university was going to have to order up new letterhead anyway, so why not go for the full megillah? Next on the list could be Yonge St., which also skirts the campus and honours a figure far more objectionable than either Dundas or Ryerson, but who has somehow escaped the roving hordes of Puritans now dictating the acceptable limits of nomenclature to a crushed and cowering city. By this time next year whole swaths of the city core could find themselves operating under new identities, confusing the tourists and playing havoc with street maps.

It’s possible trouble still lies ahead, however. Among findings in the task force report was a potentially troubling recommendation that some recognition of Ryerson’s existence be allowed to continue. Specifically, “the establishment of a physical and interactive display that provides comprehensive and accessible information about the legacy of Egerton Ryerson and the period in which he was commemorated by the university,” and  “the creation of a website that disseminates the Task Force’s historical research findings about Egerton Ryerson’s life and legacy.”

Given that the man was hardly the ogre imagined by his statue-bashing accusers, and bears much credit for the early development of an advanced education system in what was then a remote and underpopulated province, it’s possible an honest assessment of his life won’t be as dark and discreditable as today’s student body obviously hopes.

What happens then? Will they tear down the display and banish the web site? Probably. Truth can never be allowed to spoil the prejudices of historical ignorance. Especially at an institution of higher education.

Source: https://ottawacitizen.com/opinion/kelly-mcparland-renaming-ryerson-university-to-appease-the-delicate-is-probably-harmless-if-pointless/wcm/9cdcaa08-96aa-44ea-b856-6e13345e8373

Picard: The troubling Nazi-fication of COVID-19 discourse

Good commentary:

If you spend any amount of time on social media engaging about COVID-19, you will know discussions tend to get personal and ugly pretty fast.

Encourage vaccination of young people, and you’re labelled a pedophile.

Support masking in indoor settings? You’re a goose-stepping fascist.

Laud vaccination as a way out of the pandemic, and you are Joseph Goebbels and should brace yourself to be on trial for crimes against humanity at the fictional Nuremberg 2 tribunal.

Acknowledge that lockdowns are sometimes necessary to control the spread of a pandemic virus, and brace yourself for the onslaught of Hitler images.

These types of responses are predictable to a certain degree.

Godwin’s law (coined by U.S. lawyer Mike Godwin in 1990) holds that as an online discussion grows longer, the probability of a comparison involving Nazis or Hitler approaches one.

These days, debates go from zero to Hitler in about a nanosecond.

Some may want to dismiss this kind of over-the-top rhetoric as laughable, the work of a tiny minority of extremists and their bots.

But it’s obscene, and obscenely commonplace.

The Nazi-fication of public discourse is no longer the sole purview of pathetic man-boys holed up in their basements.

Enabled by social-media giants hiding behind freedom-of-speech arguments, trolls can now spread their misogynist, racist and anti-social views readily and mercilessly.

The goal here is to muddy the waters between fact and fiction, between truth and lies, and to undermine democratic institutions.

The grunts of a few can be turned into shouts that unfortunately have a growing audience, especially among the disgruntled and disenfranchised.

Playing the victim card appeals to them.

The ragtag collection of conspiracy theorists who gather at anti-mask, anti-vaccine, anti-lockdown rallies is fascinating – a stinking potpourri of grievances, with denunciations of everything from vaccines to “fake news,” to 5G, to the so-called “deep state.”

These rallies – which are getting bigger as pandemic frustrations grow – have more than their fair share of Hitler talk and imagery. They also include people wearing the yellow Star of David, implying that being told to wear a mask or get a jab is persecution comparable to that of the Jews who were rounded up and shipped in cattle cars to death camps.

Clearly some people have lost the plot.

Yet, they are being encouraged by politicians who embrace rhetoric suggesting that a position is invalid because the same view was held by Hitler.

A case in point is odious Ontario MPP Randy Hillier, who claims that lockdowns, mask rules and vaccine mandates are forms of Nazi-like tyranny.

His perverse version of freedom holds that individual rights are absolute, and that, for example, unvaccinated people have a God-given right to do as they please up to and including infecting others with the coronavirus.

Mr. Hillier and his acolytes have made a habit of casually tossing around Nazi analogies and Hitler images.

This mainstreaming of hateful images and thinly veiled hate speech should alarm us on a number of levels.

First of all, it betrays a profound ignorance of the Holocaust.

There can be no comparisons made between the state-sponsored mass murder of six million people and the temporary shutdown of the local mall.

Those who have the unmitigated gall to wear yellow stars to anti-mask rallies offend the memory of the victims of the Shoah and their descendants.

It is worth noting that Mr. Godwin, when he fashioned his adage, actually wanted people to think harder about the Holocaust and why Nazi comparisons should not be casually tossed into conversation.

Thinking is certainly not what’s happening here.

What we’re seeing is a lot of projection, the psychological impulse to project on other people what you’re actually feeling.

Former U.S. president Donald Trump, sometimes called the “Projection President,” was the embodiment of this phenomenon.

Mr. Trump, a chronic liar who wallowed in corruption, routinely attacked his opponents as corrupt liars. He also frequently described his opponents in a derogatory fashion, a lynchpin strategy of hate-mongers, and now a mainstay of social media.

Next time you hear the claims of Nazi-like tyranny and oppression, think about what is really being said.

Those who don’t want masks under any circumstances – those who not only want to refuse vaccines but prevent others from getting them – are actually the tyrants.

Their use of Hitler images and analogies is not a caution but an embrace, one we should call out, not dismiss casually.

Source: https://www.theglobeandmail.com/opinion/article-the-troubling-nazi-fication-of-covid-19-discourse/

Stephens: What Should Conservatives Conserve?

Of interest and relevance even if the conclusion is likely over-optimistic:

In 1990, V.S. Naipaul delivered a celebrated lecture on the subject of “Our Universal Civilization.” The Berlin Wall had fallen, liberal democracy was ascendant, and Naipaul wanted to reflect on what the universal civilization — by which he meant the West — meant for someone like him, a Hindu son of colonial Trinidad who had made his way “from the periphery to the center” to become one of the great novelists of his time.

Naipaul intended his lecture as a celebration of the West. But he sensed an undercurrent of disquiet, which he found expressed in Nahid Rachlin’s 1978 novel, “Foreigner.” The book is about an Iranian woman who works in Boston as a biologist and seems well assimilated to American life. But on a return visit to Tehran she loses her mental balance and falls ill. The cure, it turns out, is religion.

“We can see that the young woman was not prepared for the movement between civilizations,” Naipaul observed, “the movement out of the shut-in Iranian world, where the faith was the complete way, filled everything, left no spare corner of the mind or will or soul, to the other world, where it was necessary to be an individual and responsible.”

I’ve been thinking of Naipaul and Rachlin while reading Sohrab Ahmari’s new book, “The Unbroken Thread.” Ahmari, now the op-ed editor of The New York Post, is a friend and former colleague with whom I’ve had a political falling out. About three years ago, he made an abrupt switch from being a NeverTrump conservative, railing against the new illiberalism, to being something of a new illiberal himself, railing against “nice” conservatives who, he believes, fail to appreciate that rights-based liberalism is a sucker’s game that only the left can win.

Ahmari’s elegantly written book matters because it seeks to give moral voice to what so far has mainly been a populist scream against the values of elite liberalism, above all its disdain for limits, from moral taboos to national borders to religious rituals. His device is a series of capsule biographies of important thinkers — Confucius, Seneca, Augustine, C.S. Lewis, Abraham Joshua Heschel and Andrea Dworkin, among others — who led richer lives by observing and celebrating the limits.

There’s much to admire here, particularly in the fact that many of Ahmari’s exemplars chose the lives they did, swimming against the current of their times.

The same might be said of Ahmari himself, an immigrant from Iran who arrived in America in impoverished circumstances, rose swiftly up the ranks of conservative intelligentsia, bounced between Seattle, Boston, London and New York, converted to Catholicism and switched from neoconservatism to paleoconservatism — all by his mid-30s.

It’s a trajectory that resembles Naipaul’s. But Ahmari has a political purpose at odds with the personal one. He’s grown disenchanted with the society that has provided him with such a bounty of choice.

He frets that his son will grow up to become a member of a ruthlessly meritocratic but spiritually vacuous Western elite. He mourns North Dakota’s decision to abandon its blue law against doing business on Sundays. He laments that the “American order enshrines very few substantive ideals I would want to transmit to my son.”

In short, Ahmari, rather like the protagonist in Rachlin’s novel, thinks it would be better to put some limits on choice, not just for himself but for others as well.

There’s a charge of hypocrisy to be made here, to which Ahmari partially owns up. What he doesn’t mention is that his admiration for the unflinching high-mindedness of a Heschel or an Aquinas somehow didn’t stop him from becoming a late but enthusiastic convert to the cult of Donald Trump — that is, of the hedonistic bully.

But the larger charge against Ahmari’s book is its failure of moral and political imagination. Choice is no enemy of morality. It’s a precondition for it. It’s why, theologically speaking, temptation must exist. It’s why America, for all of its flaws, tends toward a certain kind of easygoing decency. It’s also why virtue-obsessed countries like Iran and Saudi Arabia tend to be so publicly brutal and so privately corrupt.

Ahmari’s larger falsehood is that the American order transmits few substantive ideals. “This idea of the pursuit of happiness is at the heart of the attractiveness of the civilization to so many outside it or on its periphery,” Naipaul said in that speech.

“So much is contained in it: the idea of the individual, responsibility, choice, the life of the intellect, the idea of vocation and perfectibility and achievement. It is an immense human idea. It cannot be reduced to a fixed system. It cannot generate fanaticism.”

Today, what remains of conservative intelligentsia is split. On one side are those who think that what conservatism should revert to is a kind of anti-liberalism, in the reactionary 19th-century European tradition. On the other, there are those who believe that the purpose of American conservatism is to conserve the substantive principles of 1776 — that is, of the open mind and the ever more open society.

Naipaul could have set Ahmari straight: The universal civilization “is known to exist, and because of that, other more rigid systems in the end blow away.”

Source: https://www.nytimes.com/2021/08/03/opinion/what-should-conservatives-conserve.html?action=click&module=Opinion&pgtype=Homepage

Why A.I. Should Be Afraid of Us: Because benevolent bots are suckers.

Of significance as AI becomes more prevalent. “Road rage” as the new Turing test!

Artificial intelligence is gradually catching up to ours. A.I. algorithms can now consistently beat us at chess, poker and multiplayer video games, generate images of human faces indistinguishable from real ones, write news articles (not this one!) and even love stories, and drive cars better than most teenagers do.

But A.I. isn’t perfect, yet, if Woebot is any indicator. Woebot, as Karen Brown wrote this week in Science Times, is an A.I.-powered smartphone app that aims to provide low-cost counseling, using dialogue to guide users through the basic techniques of cognitive-behavioral therapy. But many psychologists doubt whether an A.I. algorithm can ever express the kind of empathy required to make interpersonal therapy work.

“These apps really shortchange the essential ingredient that — mounds of evidence show — is what helps in therapy, which is the therapeutic relationship,” Linda Michaels, a Chicago-based therapist who is co-chair of the Psychotherapy Action Network, a professional group, told The Times.

Empathy, of course, is a two-way street, and we humans don’t exhibit a whole lot more of it for bots than bots do for us. Numerous studies have found that when people are placed in a situation where they can cooperate with a benevolent A.I., they are less likely to do so than if the bot were an actual person.

“There seems to be something missing regarding reciprocity,” Ophelia Deroy, a philosopher at Ludwig Maximilian University, in Munich, told me. “We basically would treat a perfect stranger better than A.I.”

In a recent study, Dr. Deroy and her neuroscientist colleagues set out to understand why that is. The researchers paired human subjects with unseen partners, sometimes human and sometimes A.I.; each pair then played a series of classic economic games — Trust, Prisoner’s Dilemma, Chicken and Stag Hunt, as well as one they created called Reciprocity — designed to gauge and reward cooperativeness.

Our lack of reciprocity toward A.I. is commonly assumed to reflect a lack of trust. It’s hyper-rational and unfeeling, after all, surely just out for itself, unlikely to cooperate, so why should we? Dr. Deroy and her colleagues reached a different and perhaps less comforting conclusion. Their study found that people were less likely to cooperate with a bot even when the bot was keen to cooperate. It’s not that we don’t trust the bot, it’s that we do: The bot is guaranteed benevolent, a capital-S sucker, so we exploit it.
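To make that incentive concrete, here is a minimal Python sketch of a one-shot Prisoner’s Dilemma of the kind used in such experiments; the payoff numbers and the always-cooperating bot are illustrative assumptions, not the actual design of Dr. Deroy’s study.

```python
# Hypothetical one-shot Prisoner's Dilemma payoffs: (my_points, partner_points).
# "C" = cooperate, "D" = defect. The values are illustrative only.
PAYOFFS = {
    ("C", "C"): (3, 3),   # mutual cooperation
    ("C", "D"): (0, 5),   # I cooperate, my partner exploits me
    ("D", "C"): (5, 0),   # I exploit a cooperating partner
    ("D", "D"): (1, 1),   # mutual defection
}

def play(my_move: str, partner_move: str) -> tuple[int, int]:
    return PAYOFFS[(my_move, partner_move)]

# A guaranteed-benevolent bot always cooperates, so a player who knows that
# can defect and collect the largest one-shot payoff without any risk.
bot_move = "C"
print(play("D", bot_move))  # (5, 0): exploiting the "capital-S sucker"
print(play("C", bot_move))  # (3, 3): reciprocating instead
```

Against a partner guaranteed to cooperate, defection is the tempting move in the one-shot game, which is the exploitation dynamic the study describes.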

That conclusion was borne out by conversations afterward with the study’s participants. “Not only did they tend to not reciprocate the cooperative intentions of the artificial agents,” Dr. Deroy said, “but when they basically betrayed the trust of the bot, they didn’t report guilt, whereas with humans they did.” She added, “You can just ignore the bot and there is no feeling that you have broken any mutual obligation.”

This could have real-world implications. When we think about A.I., we tend to think about the Alexas and Siris of our future world, with whom we might form some sort of faux-intimate relationship. But most of our interactions will be one-time, often wordless encounters. Imagine driving on the highway, and a car wants to merge in front of you. If you notice that the car is driverless, you’ll be far less likely to let it in. And if the A.I. doesn’t account for your bad behavior, an accident could ensue.

“What sustains cooperation in society at any scale is the establishment of certain norms,” Dr. Deroy said. “The social function of guilt is exactly to make people follow social norms that lead them to make compromises, to cooperate with others. And we have not evolved to have social or moral norms for non-sentient creatures and bots.”

That, of course, is half the premise of “Westworld.” (To my surprise Dr. Deroy had not heard of the HBO series.) But a landscape free of guilt could have consequences, she noted: “We are creatures of habit. So what guarantees that the behavior that gets repeated, and where you show less politeness, less moral obligation, less cooperativeness, will not color and contaminate the rest of your behavior when you interact with another human?”

There are similar consequences for A.I., too. “If people treat them badly, they’re programmed to learn from what they experience,” she said. “An A.I. that was put on the road and programmed to be benevolent should start to be not that kind to humans, because otherwise it will be stuck in traffic forever.” (That’s the other half of the premise of “Westworld,” basically.)

There we have it: The true Turing test is road rage. When a self-driving car starts honking wildly from behind because you cut it off, you’ll know that humanity has reached the pinnacle of achievement. By then, hopefully, A.I. therapy will be sophisticated enough to help driverless cars solve their anger-management issues.

Bias Is a Big Problem. But So Is ‘Noise.’

Useful discussion in the context of human and AI decision-making. AI provides greater consistency (less noise or variability than humans) but with the risk of bias being part of the algorithms, and the importance of distinguishing the two when assessing decision-making:

The word “bias” commonly appears in conversations about mistaken judgments and unfortunate decisions. We use it when there is discrimination, for instance against women or in favor of Ivy League graduates. But the meaning of the word is broader: A bias is any predictable error that inclines your judgment in a particular direction. For instance, we speak of bias when forecasts of sales are consistently optimistic or investment decisions overly cautious.

Society has devoted a lot of attention to the problem of bias — and rightly so. But when it comes to mistaken judgments and unfortunate decisions, there is another type of error that attracts far less attention: noise.

To see the difference between bias and noise, consider your bathroom scale. If on average the readings it gives are too high (or too low), the scale is biased. If it shows different readings when you step on it several times in quick succession, the scale is noisy. (Cheap scales are likely to be both biased and noisy.) While bias is the average of errors, noise is their variability.
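To put the scale analogy in concrete terms, here is a minimal Python sketch, using invented readings rather than data from the authors’ studies, of how bias (the average error) and noise (the variability of the errors) would be computed from repeated weighings.

```python
import statistics

# A minimal sketch of the bias/noise distinction, using invented scale readings.
true_weight = 70.0                          # kilograms (assumed)
readings = [71.2, 70.8, 71.5, 70.9, 71.1]   # five quick re-weighings (invented)

errors = [r - true_weight for r in readings]
bias = statistics.mean(errors)     # average error: the scale systematically reads high
noise = statistics.stdev(errors)   # variability of the errors across repeated readings

print(f"bias  = {bias:+.2f} kg")   # about +1.10 kg
print(f"noise = {noise:.2f} kg")   # about 0.27 kg
```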

Although it is often ignored, noise is a large source of malfunction in society. In a 1981 study, for example, 208 federal judges were asked to determine the appropriate sentences for the same 16 cases. The cases were described by the characteristics of the offense (robbery or fraud, violent or not) and of the defendant (young or old, repeat or first-time offender, accomplice or principal). You might have expected judges to agree closely about such vignettes, which were stripped of distracting details and contained only relevant information.

But the judges did not agree. The average difference between the sentences that two randomly chosen judges gave for the same crime was more than 3.5 years. Considering that the mean sentence was seven years, that was a disconcerting amount of noise.

Noise in real courtrooms is surely only worse, as actual cases are more complex and difficult to judge than stylized vignettes. It is hard to escape the conclusion that sentencing is in part a lottery, because the punishment can vary by many years depending on which judge is assigned to the case and on the judge’s state of mind on that day. The judicial system is unacceptably noisy.

Consider another noisy system, this time in the private sector. In 2015, we conducted a study of underwriters in a large insurance company. Forty-eight underwriters were shown realistic summaries of risks to which they assigned premiums, just as they did in their jobs.

How much of a difference would you expect to find between the premium values that two competent underwriters assigned to the same risk? Executives in the insurance company said they expected about a 10 percent difference. But the typical difference we found between two underwriters was an astonishing 55 percent of their average premium — more than five times as large as the executives had expected.

Many other studies demonstrate noise in professional judgments. Radiologists disagree on their readings of images and cardiologists on their surgery decisions. Forecasts of economic outcomes are notoriously noisy. Sometimes fingerprint experts disagree about whether there is a “match.” Wherever there is judgment, there is noise — and more of it than you think.

Noise causes error, as does bias, but the two kinds of error are separate and independent. A company’s hiring decisions could be unbiased overall if some of its recruiters favor men and others favor women. However, its hiring decisions would be noisy, and the company would make many bad choices. Likewise, if one insurance policy is overpriced and another is underpriced by the same amount, the company is making two mistakes, even though there is no overall bias.

Where does noise come from? There is much evidence that irrelevant circumstances can affect judgments. In the case of criminal sentencing, for instance, a judge’s mood, fatigue and even the weather can all have modest but detectable effects on judicial decisions.

Another source of noise is that people can have different general tendencies. Judges often vary in the severity of the sentences they mete out: There are “hanging” judges and lenient ones.

A third source of noise is less intuitive, although it is usually the largest: People can have not only different general tendencies (say, whether they are harsh or lenient) but also different patterns of assessment (say, which types of cases they believe merit being harsh or lenient about). Underwriters differ in their views of what is risky, and doctors in their views of which ailments require treatment. We celebrate the uniqueness of individuals, but we tend to forget that, when we expect consistency, uniqueness becomes a liability.

Once you become aware of noise, you can look for ways to reduce it. For instance, independent judgments from a number of people can be averaged (a frequent practice in forecasting). Guidelines, such as those often used in medicine, can help professionals reach better and more uniform decisions. As studies of hiring practices have consistently shown, imposing structure and discipline in interviews and other forms of assessment tends to improve judgments of job candidates.
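A small simulation sketch, with invented numbers rather than data from the studies above, of why averaging independent judgments is a standard noise-reduction technique: the spread of the averaged judgment shrinks roughly with the square root of the number of judges.

```python
import random
import statistics

random.seed(0)

TRUE_VALUE = 100.0   # the "correct" judgment (assumed for illustration)
JUDGE_NOISE = 20.0   # per-judge standard deviation (assumed)

def one_judgment() -> float:
    # Each simulated judge is unbiased here but noisy.
    return random.gauss(TRUE_VALUE, JUDGE_NOISE)

def averaged_judgment(n_judges: int) -> float:
    return statistics.mean(one_judgment() for _ in range(n_judges))

for n in (1, 4, 16):
    estimates = [averaged_judgment(n) for _ in range(2000)]
    print(n, round(statistics.stdev(estimates), 1))
# Noise falls roughly as 1/sqrt(n): about 20, 10 and 5 in this simulation.
```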

No noise-reduction techniques will be deployed, however, if we do not first recognize the existence of noise. Noise is too often neglected. But it is a serious issue that results in frequent error and rampant injustice. Organizations and institutions, public and private, will make better decisions if they take noise seriously.

Daniel Kahneman is an emeritus professor of psychology at Princeton and a recipient of the 2002 Nobel Memorial Prize in Economic Sciences. Olivier Sibony is a professor of strategy at the HEC Paris business school. Cass R. Sunstein is a law professor at Harvard. They are the authors of the forthcoming book “Noise: A Flaw in Human Judgment,” on which this essay is based.

Source: https://www.nytimes.com/2021/05/15/opinion/noise-bias-kahneman.html?action=click&module=Opinion&pgtype=Homepage

Who Is Making Sure the A.I. Machines Aren’t Racist?

Good overview of the issues and debates (Google’s earlier slogan of “do no evil” seems so quaint):

Hundreds of people gathered for the first lecture at what had become the world’s most important conference on artificial intelligence — row after row of faces. Some were East Asian, a few were Indian, and a few were women. But the vast majority were white men. More than 5,500 people attended the meeting, five years ago in Barcelona, Spain.

Timnit Gebru, then a graduate student at Stanford University, remembers counting only six Black people other than herself, all of whom she knew, all of whom were men.

The homogeneous crowd crystallized for her a glaring issue. The big thinkers of tech say A.I. is the future. It will underpin everything from search engines and email to the software that drives our cars, directs the policing of our streets and helps create our vaccines.

But it is being built in a way that replicates the biases of the almost entirely male, predominantly white work force making it.

In the nearly 10 years I’ve written about artificial intelligence, two things have remained a constant: The technology relentlessly improves in fits and sudden, great leaps forward. And bias is a thread that subtly weaves through that work in a way that tech companies are reluctant to acknowledge.

On her first night home in Menlo Park, Calif., after the Barcelona conference, sitting cross-​legged on the couch with her laptop, Dr. Gebru described the A.I. work force conundrum in a Facebook post.

“I’m not worried about machines taking over the world. I’m worried about groupthink, insularity and arrogance in the A.I. community — especially with the current hype and demand for people in the field,” she wrote. “The people creating the technology are a big part of the system. If many are actively excluded from its creation, this technology will benefit a few while harming a great many.”

The A.I. community buzzed about the mini-manifesto. Soon after, Dr. Gebru helped create a new organization, Black in A.I. After finishing her Ph.D., she was hired by Google.

She teamed with Margaret Mitchell, who was building a group inside Google dedicated to “ethical A.I.” Dr. Mitchell had previously worked in the research lab at Microsoft. She had grabbed attention when she told Bloomberg News in 2016 that A.I. suffered from a “sea of dudes” problem. She estimated that she had worked with hundreds of men over the previous five years and about 10 women.

Their work was hailed as groundbreaking. The nascent A.I. industry, it had become clear, needed minders and people with different perspectives.

About six years ago, A.I. in a Google online photo service organized photos of Black people into a folder called “gorillas.” Four years ago, a researcher at a New York start-up noticed that the A.I. system she was working on was egregiously biased against Black people. Not long after, a Black researcher in Boston discovered that an A.I. system couldn’t identify her face — until she put on a white mask.

In 2018, when I told Google’s public relations staff that I was working on a book about artificial intelligence, it arranged a long talk with Dr. Mitchell to discuss her work. As she described how she built the company’s Ethical A.I. team — and brought Dr. Gebru into the fold — it was refreshing to hear from someone so closely focused on the bias problem.

But nearly three years later, Dr. Gebru was pushed out of the company without a clear explanation. She said she had been fired after criticizing Google’s approach to minority hiring and, with a research paper, highlighting the harmful biases in the A.I. systems that underpin Google’s search engine and other services.

“Your life starts getting worse when you start advocating for underrepresented people,” Dr. Gebru said in an email before her firing. “You start making the other leaders upset.”

As Dr. Mitchell defended Dr. Gebru, the company removed her, too. She had searched through her own Google email account for material that would support their position and forwarded emails to another account, which somehow got her into trouble. Google declined to comment for this article.

Their departure became a point of contention for A.I. researchers and other tech workers. Some saw a giant company no longer willing to listen, too eager to get technology out the door without considering its implications. I saw an old problem — part technological and part sociological — finally breaking into the open.

It should have been a wake-up call.

In June 2015, a friend sent Jacky Alciné, a 22-year-old software engineer living in Brooklyn, an internet link for snapshots the friend had posted to the new Google Photos service. Google Photos could analyze snapshots and automatically sort them into digital folders based on what was pictured. One folder might be “dogs,” another “birthday party.”

When Mr. Alciné clicked on the link, he noticed one of the folders was labeled “gorillas.” That made no sense to him, so he opened the folder. He found more than 80 photos he had taken nearly a year earlier of a friend during a concert in nearby Prospect Park. That friend was Black.

He might have let it go if Google had mistakenly tagged just one photo. But 80? He posted a screenshot on Twitter. “Google Photos, y’all, messed up,” he wrote, using much saltier language. “My friend is not a gorilla.”

Like facial recognition services, talking digital assistants and conversational “chatbots,” Google Photos relied on an A.I. system that learned its skills by analyzing enormous amounts of digital data.

Called a “neural network,” this mathematical system could learn tasks that engineers could never code into a machine on their own. By analyzing thousands of photos of gorillas, it could learn to recognize a gorilla. It was also capable of egregious mistakes. The onus was on engineers to choose the right data when training these mathematical systems. (In this case, the easiest fix was to eliminate “gorilla” as a photo category.)

As a software engineer, Mr. Alciné understood the problem. He compared it to making lasagna. “If you mess up the lasagna ingredients early, the whole thing is ruined,” he said. “It is the same thing with A.I. You have to be very intentional about what you put into it. Otherwise, it is very difficult to undo.”

In 2017, Deborah Raji, a 21-​year-​old Black woman from Ottawa, sat at a desk inside the New York offices of Clarifai, the start-up where she was working. The company built technology that could automatically recognize objects in digital images and planned to sell it to businesses, police departments and government agencies.

She stared at a screen filled with faces — images the company used to train its facial recognition software.

As she scrolled through page after page of these faces, she realized that most — more than 80 percent — were of white people. More than 70 percent of those white people were male. When Clarifai trained its system on this data, it might do a decent job of recognizing white people, Ms. Raji thought, but it would fail miserably with people of color, and probably women, too.

Clarifai was also building a “content moderation system,” a tool that could automatically identify and remove pornography from images people posted to social networks. The company trained this system on two sets of data: thousands of photos pulled from online pornography sites, and thousands of G‑rated images bought from stock photo services.

The system was supposed to learn the difference between the pornographic and the anodyne. The problem was that the G‑rated images were dominated by white people, and the pornography was not. The system was learning to identify Black people as pornographic.

“The data we use to train these systems matters,” Ms. Raji said. “We can’t just blindly pick our sources.”

This was obvious to her, but to the rest of the company it was not. Because the people choosing the training data were mostly white men, they didn’t realize their data was biased.

“The issue of bias in facial recognition technologies is an evolving and important topic,” Clarifai’s chief executive, Matt Zeiler, said in a statement. Measuring bias, he said, “is an important step.”

Before joining Google, Dr. Gebru collaborated on a study with a young computer scientist, Joy Buolamwini. A graduate student at the Massachusetts Institute of Technology, Ms. Buolamwini, who is Black, came from a family of academics. Her grandfather specialized in medicinal chemistry, and so did her father.

She gravitated toward facial recognition technology. Other researchers believed it was reaching maturity, but when she used it, she knew it wasn’t.

In October 2016, a friend invited her for a night out in Boston with several other women. “We’ll do masks,” the friend said. Her friend meant skin care masks at a spa, but Ms. Buolamwini assumed Halloween masks. So she carried a white plastic Halloween mask to her office that morning.

It was still sitting on her desk a few days later as she struggled to finish a project for one of her classes. She was trying to get a detection system to track her face. No matter what she did, she couldn’t quite get it to work.

In her frustration, she picked up the white mask from her desk and pulled it over her head. Before it was all the way on, the system recognized her face — or, at least, it recognized the mask.

“Black Skin, White Masks,” she said in an interview, nodding to the 1952 critique of historical racism from the psychiatrist Frantz Fanon. “The metaphor becomes the truth. You have to fit a norm, and that norm is not you.”

Ms. Buolamwini started exploring commercial services designed to analyze faces and identify characteristics like age and sex, including tools from Microsoft and IBM.

She found that when the services read photos of lighter-​skinned men, they misidentified sex about 1 percent of the time. But the darker the skin in the photo, the larger the error rate. It rose particularly high with images of women with dark skin. Microsoft’s error rate was about 21 percent. IBM’s was about 35 percent.
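The core of such an audit is simply computing an error rate per demographic subgroup instead of one overall number. Here is a minimal, hypothetical Python sketch of that disaggregation; the records and subgroup labels are invented for illustration and are not data from Ms. Buolamwini’s study.

```python
from collections import defaultdict

# Invented audit records: (subgroup, predicted_sex, true_sex). Not real data.
results = [
    ("lighter-skinned male", "male", "male"),
    ("lighter-skinned male", "male", "male"),
    ("darker-skinned female", "male", "female"),    # a misclassification
    ("darker-skinned female", "female", "female"),
    # ... many more records in a real audit
]

errors = defaultdict(int)
totals = defaultdict(int)
for subgroup, predicted, actual in results:
    totals[subgroup] += 1
    errors[subgroup] += (predicted != actual)

for subgroup, total in totals.items():
    print(f"{subgroup}: {errors[subgroup] / total:.0%} error rate")
```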

Published in the winter of 2018, the study drove a backlash against facial recognition technology and, particularly, its use in law enforcement. Microsoft’s chief legal officer said the company had turned down sales to law enforcement when there was concern the technology could unreasonably infringe on people’s rights, and he made a public call for government regulation.

Twelve months later, Microsoft backed a bill in Washington State that would require notices to be posted in public places using facial recognition and ensure that government agencies obtained a court order when looking for specific people. The bill passed, and it takes effect later this year. The company, which did not respond to a request for comment for this article, did not back other legislation that would have provided stronger protections.

Ms. Buolamwini began to collaborate with Ms. Raji, who moved to M.I.T. They started testing facial recognition technology from a third American tech giant: Amazon. The company had started to market its technology to police departments and government agencies under the name Amazon Rekognition.

Ms. Buolamwini and Ms. Raji published a study showing that an Amazon face service also had trouble identifying the sex of female and darker-​skinned faces. According to the study, the service mistook women for men 19 percent of the time and misidentified darker-​skinned women for men 31 percent of the time. For lighter-​skinned males, the error rate was zero.

Amazon called for government regulation of facial recognition. It also attacked the researchers in private emails and public blog posts.

“The answer to anxieties over new technology is not to run ‘tests’ inconsistent with how the service is designed to be used, and to amplify the test’s false and misleading conclusions through the news media,” an Amazon executive, Matt Wood, wrote in a blog post that disputed the study and a New York Times article that described it.

In an open letter, Dr. Mitchell and Dr. Gebru rejected Amazon’s argument and called on it to stop selling to law enforcement. The letter was signed by 25 artificial intelligence researchers from Google, Microsoft and academia.

Last June, Amazon backed down. It announced that it would not let the police use its technology for at least a year, saying it wanted to give Congress time to create rules for the ethical use of the technology. Congress has yet to take up the issue. Amazon declined to comment for this article.

Dr. Gebru and Dr. Mitchell had less success fighting for change inside their own company. Corporate gatekeepers at Google were heading them off with a new review system that had lawyers and even communications staff vetting research papers.

Dr. Gebru’s dismissal in December stemmed, she said, from the company’s treatment of a research paper she wrote alongside six other researchers, including Dr. Mitchell and three others at Google. The paper discussed ways that a new type of language technology, including a system built by Google that underpins its search engine, can show bias against women and people of color.

After she submitted the paper to an academic conference, Dr. Gebru said, a Google manager demanded that she either retract the paper or remove the names of Google employees. She said she would resign if the company could not tell her why it wanted her to retract the paper and answer other concerns.

The response: Her resignation was accepted immediately, and Google revoked her access to company email and other services. A month later, it removed Dr. Mitchell’s access after she searched through her own email in an effort to defend Dr. Gebru.

In a Google staff meeting last month, just after the company fired Dr. Mitchell, the head of the Google A.I. lab, Jeff Dean, said the company would create strict rules meant to limit its review of sensitive research papers. He also defended the reviews. He declined to discuss the details of Dr. Mitchell’s dismissal but said she had violated the company’s code of conduct and security policies.

One of Mr. Dean’s new lieutenants, Zoubin Ghahramani, said the company must be willing to tackle hard issues. There are “uncomfortable things that responsible A.I. will inevitably bring up,” he said. “We need to be comfortable with that discomfort.”

But it will be difficult for Google to regain trust — both inside the company and out.

“They think they can get away with firing these people and it will not hurt them in the end, but they are absolutely shooting themselves in the foot,” said Alex Hanna, a longtime part of Google’s 10-member Ethical A.I. team. “What they have done is incredibly myopic.”

Source: https://www.nytimes.com/2021/03/15/technology/artificial-intelligence-google-bias.html

The Robots Are Coming for Phil in Accounting

Implications for many white collar workers, including in government given the nature of repetitive operational work:

The robots are coming. Not to kill you with lasers, or beat you in chess, or even to ferry you around town in a driverless Uber.

These robots are here to merge purchase orders into columns J and K of next quarter’s revenue forecast, and transfer customer data from the invoicing software to the Oracle database. They are unassuming software programs with names like “Auxiliobits — DataTable To Json String,” and they are becoming the star employees at many American companies.

Some of these tools are simple apps, downloaded from online stores and installed by corporate I.T. departments, that do the dull-but-critical tasks that someone named Phil in Accounting used to do: reconciling bank statements, approving expense reports, reviewing tax forms. Others are expensive, custom-built software packages, armed with more sophisticated types of artificial intelligence, that are capable of doing the kinds of cognitive work that once required teams of highly-paid humans.
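For a rough idea of what these unassuming programs do, here is a minimal, hypothetical sketch of the kind of record-shuffling task R.P.A. bots automate (copying rows from one system’s CSV export into JSON for another system). The file names and fields are invented, and real R.P.A. products wrap steps like this in a visual workflow rather than hand-written code.

```python
import csv
import json

# Hypothetical task: copy invoice records exported from one system (a CSV file)
# into the format another system ingests (JSON). File names and fields are invented.
def invoices_to_json(csv_path: str, json_path: str) -> int:
    with open(csv_path, newline="") as f:
        rows = list(csv.DictReader(f))
    records = [
        {"customer": r["customer_id"], "amount": float(r["amount"]), "due": r["due_date"]}
        for r in rows
    ]
    with open(json_path, "w") as f:
        json.dump(records, f, indent=2)
    return len(records)

# Example call (hypothetical file names):
# invoices_to_json("invoices_export.csv", "invoices_for_billing_system.json")
```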

White-collar workers, armed with college degrees and specialized training, once felt relatively safe from automation. But recent advances in A.I. and machine learning have created algorithms capable of outperforming doctors, lawyers and bankers at certain parts of their jobs. And as bots learn to do higher-value tasks, they are climbing the corporate ladder.

The trend — quietly building for years, but accelerating to warp speed since the pandemic — goes by the sleepy moniker “robotic process automation.” And it is transforming workplaces at a pace that few outsiders appreciate. Nearly 8 in 10 corporate executives surveyed by Deloitte last year said they had implemented some form of R.P.A. Another 16 percent said they planned to do so within three years.

Most of this automation is being done by companies you’ve probably never heard of. UiPath, the largest stand-alone automation firm, is valued at $35 billion — roughly the size of eBay — and is slated to go public later this year. Other companies like Automation Anywhere and Blue Prism, which have Fortune 500 companies like Coca-Cola and Walgreens Boots Alliance as clients, are also enjoying breakneck growth, and tech giants like Microsoft have recently introduced their own automation products to get in on the action.

Executives generally spin these bots as being good for everyone, “streamlining operations” while “liberating workers” from mundane and repetitive tasks. But they are also liberating plenty of people from their jobs. Independent experts say that major corporate R.P.A. initiatives have been followed by rounds of layoffs, and that cutting costs, not improving workplace conditions, is usually the driving factor behind the decision to automate. 

Craig Le Clair, an analyst with Forrester Research who studies the corporate automation market, said that for executives, much of the appeal of R.P.A. bots is that they are cheap, easy to use and compatible with their existing back-end systems. He said that companies often rely on them to juice short-term profits, rather than embarking on more expensive tech upgrades that might take years to pay for themselves.

“It’s not a moonshot project like a lot of A.I., so companies are doing it like crazy,” Mr. Le Clair said. “With R.P.A., you can build a bot that costs $10,000 a year and take out two to four humans.”

Covid-19 has led some companies to turn to automation to deal with growing demand, closed offices, or budget constraints. But for other companies, the pandemic has provided cover for executives to implement ambitious automation plans they dreamed up long ago.

“Automation is more politically acceptable now,” said Raul Vega, the chief executive of Auxis, a firm that helps companies automate their operations.

Before the pandemic, Mr. Vega said, some executives turned down offers to automate their call centers, or shrink their finance departments, because they worried about scaring their remaining workers or provoking a backlash like the one that followed the outsourcing boom of the 1990s, when C.E.O.s became villains for sending jobs to Bangalore and Shenzhen.

But those concerns matter less now, with millions of people already out of work and many businesses struggling to stay afloat.

Now, Mr. Vega said, “they don’t really care, they’re just going to do what’s right for their business.”

Sales of automation software are expected to rise by 20 percent this year, after increasing by 12 percent last year, according to the research firm Gartner. And the consulting firm McKinsey, which predicted before the pandemic that 37 million U.S. workers would be displaced by automation by 2030, recently increased its projection to 45 million.

A white-collar wake-up call

Not all bots are the job-destroying kind. Holly Uhl, a technology manager at State Auto Insurance Companies, said that her firm has used automation to do 173,000 hours’ worth of work in areas like underwriting and human resources without laying anyone off.

“People are concerned that there’s a possibility of losing their jobs, or not having anything to do,” she said. “But once we have a bot in the area, and people see how automation is applied, they’re truly thrilled that they don’t have to do that work anymore.”

As bots become capable of complex decision-making, rather than doing single repetitive tasks, their disruptive potential is growing.

Recent studies by researchers at Stanford University and the Brookings Institution compared the text of job listings with the wording of A.I.-related patents, looking for phrases like “make prediction” and “generate recommendation” that appeared in both. They found that the groups with the highest exposure to A.I. were better-paid, better-educated workers in technical and supervisory roles, with men, white and Asian-American workers, and midcareer professionals being some of the most endangered. Workers with bachelor’s or graduate degrees were nearly four times as exposed to A.I. risk as those with just a high school degree, the researchers found, and residents of high-tech cities like Seattle and Salt Lake City were more vulnerable than workers in smaller, more rural communities.
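The matching approach described above can be thought of as an overlap score between a job listing’s text and a list of phrases drawn from A.I. patents. Here is a heavily simplified sketch of that idea; the phrase list and the sample ad are invented, and the actual studies use far more sophisticated text matching.

```python
# Toy "A.I. exposure" score: what fraction of a phrase list (drawn from
# A.I.-related patents in the real studies) appears in a job listing.
# The phrases and the sample ad below are invented for illustration.
AI_PATENT_PHRASES = {"make prediction", "generate recommendation", "classify image"}

def exposure_score(job_listing: str) -> float:
    text = job_listing.lower()
    hits = sum(phrase in text for phrase in AI_PATENT_PHRASES)
    return hits / len(AI_PATENT_PHRASES)

print(exposure_score("Analyst needed to make predictions and generate recommendations"))
# about 0.67: two of the three patent phrases appear in this (invented) listing
```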

“A lot of professional work combines some element of routine information processing with an element of judgment and discretion,” said David Autor, an economist at M.I.T. who studies the labor effects of automation. “That’s where software has always fallen short. But with A.I., that type of work is much more in the kill path.”

Many of those vulnerable workers don’t see this coming, in part because the effects of white-collar automation are often couched in jargon and euphemism. On their websites, R.P.A. firms promote glowing testimonials from their customers, often glossing over the parts that involve actual humans.

“Sprint Automates 50 Business Processes In Just Six Months.” (Possible translation: Sprint replaced 300 people in the billing department.)

“Dai-ichi Life Insurance Saves 132,000 Hours Annually” (Bye-bye, claims adjusters.)

“600% Productivity Gain for Credit Reporting Giant with R.P.A.” (Don’t let the door hit you, data analysts.)

Jason Kingdon, the chief executive of the R.P.A. firm Blue Prism, speaks in the softened vernacular of displacement too. He refers to his company’s bots as “digital workers,” and he explained that the economic shock of the pandemic had “massively raised awareness” among executives about the variety of work that no longer requires human involvement.

“We think any business process can be automated,” he said.

Mr. Kingdon tells business leaders that between half and two-thirds of all the tasks currently being done at their companies can be done by machines. Ultimately, he sees a future in which humans will collaborate side-by-side with teams of digital employees, with plenty of work for everyone, although he conceded that the robots have certain natural advantages.

“A digital worker,” he said, “can be scaled in a vastly more flexible way.”

Humans have feared losing our jobs to machines for millennia. (In 350 BCE, Aristotle worried that self-playing harps would make musicians obsolete.) And yet, automation has never created mass unemployment, in part because technology has always generated new jobs to replace the ones it destroyed.

During the 19th and 20th centuries, some lamplighters and blacksmiths became obsolete, but more people were able to make a living as electricians and car dealers. And today’s A.I. optimists argue that while new technology may displace some workers, it will spur economic growth and create better, more fulfilling jobs, just as it has in the past.

But that is no guarantee, and there is growing evidence that this time may be different.

In a series of recent studies, Daron Acemoglu of M.I.T. and Pascual Restrepo of Boston University, two well-respected economists who have researched the history of automation, found that for most of the 20th century, the optimistic take on automation prevailed — on average, in industries that implemented automation, new tasks were created faster than old ones were destroyed.

Since the late 1980s, they found, the equation had flipped — tasks have been disappearing to automation faster than new ones are appearing.

This shift may be related to the popularity of what they call “so-so automation” — technology that is just barely good enough to replace human workers, but not good enough to create new jobs or make companies significantly more productive.

A common example of so-so automation is the grocery store self-checkout machine. These machines don’t cause customers to buy more groceries, or help them shop significantly faster — they simply allow store owners to staff slightly fewer employees on a shift. This simple, substitutive kind of automation, Mr. Acemoglu and Mr. Restrepo wrote, threatens not just individual workers, but the economy as a whole.

“The real danger for labor,” they wrote, “may come not from highly productive but from ‘so-so’ automation technologies that are just productive enough to be adopted and cause displacement.”

Only the most devoted Luddites would argue against automating any job, no matter how menial or dangerous. But not all automation is created equal, and much of the automation being done in white-collar workplaces today is the kind that may not help workers over the long run.

During past eras of technological change, governments and labor unions have stepped in to fight for automation-prone workers, or support them while they trained for new jobs. But this time, there is less in the way of help. Congress has rejected calls to fund federal worker retraining programs for years, and while some of the money in the $1.9 trillion Covid-19 relief bill Democrats hope to pass this week will go to laid-off and furloughed workers, none of it is specifically earmarked for job training programs that could help displaced workers get back on their feet.

Another key difference is that in the past, automation arrived gradually, factory machine by factory machine. But today’s white-collar automation is so sudden — and often, so deliberately obscured by management — that few workers have time to prepare.

“The rate of progression of this technology is faster than any previous automation,” said Mr. Le Clair, the Forrester analyst, who thinks we are closer to the beginning than the end of the corporate A.I. boom.

“We haven’t hit the exponential point of this stuff yet,” he added. “And when we do, it’s going to be dramatic.”

The corporate world’s automation fever isn’t purely about getting rid of workers. Executives have shareholders and boards to satisfy, and competitors to keep up with. And some automation does, in fact, lift all boats, making workers’ jobs better and more interesting while allowing companies to do more with less.

But as A.I. enters the corporate world, it is forcing workers at all levels to adapt, and focus on developing the kinds of distinctly human skills that machines can’t easily replicate.

Ellen Wengert, a former data processor at an Australian insurance firm, learned this lesson four years ago, when she arrived at work one day to find a bot-builder sitting in her seat.

The man, coincidentally an old classmate of hers, worked for a consulting firm that specialized in R.P.A. He explained that he’d been hired to automate her job, which mostly involved moving customer data from one database to another. He then asked her to, essentially, train her own replacement — teaching him how to do the steps involved in her job so that he, in turn, could program a bot to do the same thing.

Ms. Wengert wasn’t exactly surprised. She’d known that her job was straightforward and repetitive, making it low-hanging fruit for automation. But she was annoyed that her managers seemed so eager to hand it over to a machine.

“They were desperate to create this sense of excitement around automation,” she said. “Most of my colleagues got on board with that pretty readily, but I found it really jarring, to be feigning excitement about us all potentially losing our jobs.”

For Ms. Wengert, 27, the experience was a wake-up call. She had a college degree and was early in her career. But some of her colleagues had been happily doing the same job for years, and she worried that they would fall through the cracks.

“Even though these aren’t glamorous jobs, there are a lot of people doing them,” she said.

She left the insurance company after her contract ended. And she now works as a second-grade teacher — a job she says she sought out, in part, because it seemed harder to automate.

Source: https://www.nytimes.com/2021/03/06/business/the-robots-are-coming-for-phil-in-accounting.html

India’s Vaccine Rollout Stumbles as COVID-19 Cases Decline. That’s Bad News for the Rest of the World

Of note:

India’s COVID-19 vaccination scheme looked set for success.

For the “pharmacy of the world,” which produced 60% of the vaccines for global use before the pandemic, supply was never going to be a problem. The country already had the world’s largest immunization program, delivering 390 million doses annually to protect against diseases like tuberculosis and measles, and an existing infrastructure that would make COVID-19 vaccine distribution easier. Ahead of the launch, the government organized dry runs, put up billboards touting the vaccines and replaced phone ringing tones with a message urging people to get vaccinated.

And yet, one month into its vaccination campaign, India is struggling to get even its health workers to line up for shots. In early January, India announced a goal to inoculate 300 million people by August. Just 8.4 million received a vaccine in the first month, less than a quarter of the number needed to stay on pace for the government’s goal. So far, vaccinations are only available for frontline health workers, and in some places police officers and soldiers.
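A rough back-of-the-envelope check of that pacing claim (not from the article; the roughly seven-month window and a uniform monthly pace are assumptions):

```python
# Sketch: does 8.4 million doses in the first month keep pace with a
# goal of 300 million people by August? (window length is an assumption)
goal = 300_000_000            # stated target: 300 million by August
months_available = 7          # assumed: roughly mid-January to mid-August
needed_per_month = goal / months_available   # about 43 million per month

actual_first_month = 8_400_000

share_of_pace = actual_first_month / needed_per_month
print(f"Needed per month: ~{needed_per_month / 1e6:.0f} million")
print(f"First month as share of required pace: {share_of_pace:.0%}")
# ~20%, consistent with "less than a quarter of the number needed"
```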

And even that initial interest might be waning. India’s vaccine scheme relies on a mobile phone app that schedules vaccination appointments. On the first day doses were administered, Jan. 16, some 191,000 people showed up. But four weeks later, when those people were summoned for the second dose, only 4% returned.

A. Valsala, a community health worker in the southern city of Kollam who spent months fighting COVID-19 door-to-door, skipped her appointment to get her first dose of the vaccine after a hectic day on Feb. 12. “I don’t feel the need to rush because the worst is over,” she says. “So there is a sense that it is okay to wait and watch since there are concerns about how these vaccines were developed so fast.”

A. Valsala’s comments point to a troubling trend—one reflected in TIME’s interviews with health workers across India. A combination of waning COVID-19 cases nationwide, questions over the efficacy of one of the two vaccines currently authorized for use in the country and complacency are resulting in growing hesitancy to get vaccinated.

“There is a reduced perception of threat with regard to the virus,” says Dr. Chandrakant Lahariya, a New Delhi-based epidemiologist. “Had the same vaccines been available during the peak of the pandemic in September and October, the uptake would have been different.”

A troubling sign for the rest of the world

Public health experts are now concerned that the sluggish start could impact the subsequent phases of the vaccination drive, especially when the vaccination scheme is widened next month to include older people and those with preexisting conditions.

“In India, people have an inherent trust in doctors,” says Dr. Smisha Agarwal, Research Director at the Johns Hopkins Global mHealth Initiative. “So when [doctors] don’t turn up to get vaccines, it reaffirms any doubts that the general public might have.”

In an effort to accelerate the vaccination drive, the government started walk-in vaccinations as opposed to allowing only those scheduled for the day to get the shots. It also set up new vaccination centers across the country.

For now, India might be an outlier: a country with a surfeit of vaccines but few takers. But its experience shows that, while the first challenge is stocking up on vaccine supplies, convincing people to take them can be its own huge task. It might be a portent for the rest of the world as the number of COVID-19 cases declines globally and vaccines become more widely available, warns Dr. Paul Griffin, an infectious diseases specialist at the University of Queensland in Brisbane.

“It’s easy to be complacent about getting a vaccine when cases are declining,” Griffin says, “but now, when the trajectory looks favorable, is the right time to step back and realize that this will be our reality for a long time if we don’t speed up the vaccinations at this moment.”

How India fell behind on vaccinations

Despite being well-positioned, India’s vaccination drive got off to a rough start. The hasty approval of the country’s homegrown vaccine, Covaxin, with little data available while Phase 3 trials were still underway (those remain ongoing), drew criticism from health workers and scientists. The mainstay of India’s vaccination scheme is Covishield, the Indian-made version of the vaccine developed by the University of Oxford and AstraZeneca, which has been approved by regulators in the U.K., the E.U. and elsewhere. However, Covaxin is the only vaccine on offer at some vaccination centers in urban areas, and health workers don’t get to choose which jab they receive.

“Covaxin might be efficacious but what guides me is data,” says Dr. Nirmalya Mohapatra at the Dr. Ram Manohar Lohia Hospital in New Delhi, where only Covaxin is available. “We also want vaccines faster because we have seen deaths because of this disease but that doesn’t mean we should cut corners with the data.” Mohapatra has refused to take Covaxin until more data is available.

But even for Covishield, there aren’t as many takers as expected. In the western city of Nagpur, fewer than 36% of those scheduled to take the vaccine turned up on Feb. 11, according to a Times of India report. In the north, the city of Chandigarh is planning to set up counselling centres to dispel fears about the vaccines. In a hospital in the southern city of Thrissur, Dr. Pradeep Gopalakrishnan was the last one to get the vaccine on the morning of Feb. 8. “No one came in after me, so around 69 doses set aside for the day remained unused,” he says.

Experts say the lack of enthusiasm could also be attributed to a decline in cases. India’s daily case average has dropped to less than 12,000—down from more than 90,000 in September. At the peak of the pandemic, health care systems were overwhelmed, with shortages of hospital beds and oxygen cylinders reported across the country. India’s official COVID-19 tally, now at nearly 11 million, surged to No. 2 in the world, behind the U.S., where it remains to this day.

In a Feb. 4 press conference, the Indian Council of Medical Research said that more than 20% of subjects over age 18 from across the country tested in late December and early January had antibodies for the coronavirus that causes COVID-19, meaning they likely had the disease and recovered. Similar studies in Mumbai and Delhi showed even higher levels of antibodies—up to 56%, according to Delhi’s health minister. Several health workers interviewed by TIME said they contracted COVID-19, and were less concerned about getting the vaccine immediately because they believe they have immunity.

But health experts warn India is far from herd immunity. And many worry that people not taking vaccines seriously might not bode well for India, given that other countries’ later waves of COVID-19 were even more severe than those early in the pandemic. Already, Maharashtra, the worst-hit state in the country, has seen a COVID-19 spike in recent days, with daily cases above 5,000 on Feb. 18 for the first time in two and a half months.

‘The worst is not over yet’

On a global level too, the tendency to let the guard down might hamper efforts to bring the pandemic under control. Experts say vaccination is necessary not only to get long-term immunity but to also reduce the potential for new mutations, which are largely behind recent surges in cases in the U.K and Brazil.

“High vaccination coverage rate reduces the potential for new variants,” says Griffin of the University of Queensland. “The more cases we have in circulation, the more chances there are of generating mutations that confer some kind of benefit to the virus.”

Even in countries like the U.S. and the U.K., where vaccination started during a surge in cases, there is a risk that people lose enthusiasm once cases decline. Experts emphasize the need for better communication with the public to ensure that vaccination drives don’t slow down with COVID-19 case counts.

“There isn’t any time to wait because the worst is not over yet,” says Agarwal of Johns Hopkins. “Despite the fatigue, ramping up the vaccination is the only and best weapon we have against what might otherwise be a very long winter.”

Source: India’s Vaccine Rollout Stumbles as COVID-19 Cases Decline. That’s Bad News for the Rest of the World

American Life Expectancy Dropped By A Full Year In The First Half Of 2020

Telling. Haven’t seen any comparative Canadian data but likely a similar but smaller effect:

The average U.S. life expectancy dropped by a year in the first half of 2020, according to a new report from the National Center for Health Statistics, a part of the Centers for Disease Control and Prevention.

Life expectancy at birth for the total U.S. population was 77.8 years – a decline of 1 year from 78.8 in 2019. For males, the life expectancy at birth was 75.1 – a decline of 1.2 years from 2019. For females, life expectancy declined to 80.5 years, a 0.9 year decrease from 2019.

Deaths from COVID-19 are the main factor in the overall drop in U.S. life expectancy between January and June 2020, the CDC says. But they’re not the only one: a surge in drug overdose deaths is part of the decline, too.

“If you’ll recall, in recent pre-pandemic years there were slight drops in life expectancy due in part to the rise in overdose deaths,” explains NCHS spokesperson Jeff Lancashire in an email to NPR. “So they are likely contributing here as well but we don’t know to what degree. COVID-19 is responsible for an estimated 2/3 of all excess deaths in 2020, and excess deaths are driving the decline.”

The group that suffered the largest decline was non-Hispanic Black males, whose life expectancy dropped by 3 years. Hispanic males also saw a large decrease in life expectancy, with a decline of 2.4 years. Non-Hispanic Black females saw a life expectancy decline of 2.3 years, and Hispanic females faced a decline of 1.1 years.

Throughout the coronavirus pandemic, Black and Latino Americans have died from COVID-19 at disproportionately high rates.

The life expectancy decline was less pronounced among non-Hispanic whites: males in that group saw a decline of 0.8 year, while for white females the decline was 0.7 year.

Women tend to live longer than men, and in the first half of 2020, that margin grew: the difference in their life expectancy widened to 5.4 years, from 5.1 in 2019.
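The widened gap follows directly from the figures reported above; a quick consistency check using only those numbers (a sketch, not part of the NCHS report):

```python
# Check that the reported 2019 and 2020 sex gaps match the quoted figures.
male_2020, female_2020 = 75.1, 80.5
male_2019 = male_2020 + 1.2      # reported 1.2-year decline for males
female_2019 = female_2020 + 0.9  # reported 0.9-year decline for females

print(round(female_2020 - male_2020, 1))  # 5.4 years: the widened 2020 gap
print(round(female_2019 - male_2019, 1))  # 5.1 years: the 2019 gap
```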

The report estimated life expectancy in the U.S. based on provisional death counts for January to June 2020. Because the NCHS wanted to assess the effects of 2020’s increase in deaths, for the first time it published its life expectancy tables based on provisional death certificate data, rather than final counts.

Its authors point out a few limitations in these estimates. One is that the data is from the first six months of 2020 – so it does not reflect the entirety of the COVID-19 pandemic. There is also seasonality in death patterns, with more deaths generally happening in winter than summer. This half-year data does not account for that.

Another limitation is that the COVID-19 pandemic struck different parts of the U.S. at different times in the year. The areas most affected in the first half of 2020 are more urban and have different demographics than the areas hit hard by the virus later in the year.

As a result, the authors write, “life expectancy at birth for the first half of 2020 may be underestimated since the populations more severely affected, Hispanic and non-Hispanic black populations, are more likely to live in urban areas.”

The report parallels the findings published last month by researchers at the University of Southern California and Princeton University, which found that the deaths caused by COVID-19 have reduced overall life expectancy by 1.13 years.

In the U.S., more than 488,000 people have died from COVID-19. The latest estimates from the University of Washington’s Institute of Health Metrics and Evaluation predict 614,503 U.S. deaths by June 1.

Source: American Life Expectancy Dropped By A Full Year In The First Half Of 2020