Mims: The AI Boom That Could Make Google and Microsoft Even More Powerful

Good long read. Hard to be optimistic about how the technology will be used. And the regulators will likely be more than a few steps behind corporations:

Seeing the new artificial intelligence-powered chatbots touted in dueling announcements this past week by Microsoft and Google drives home two major takeaways. First, the feeling of “wow, this definitely could change everything.” And second, the realization that for chat-based search and related AI technologies to have an impact, we’re going to have to put a lot of faith in them and the companies they come from.

When AI is delivering answers, and not just information for us to base decisions on, we’re going to have to trust it much more deeply than we have before. This new generation of chat-based search engines is better described as “answer engines” that can, in a sense, “show their work” by giving links to the webpages they deliver and summarize. But for an answer engine to have real utility, we’re going to have to trust it enough, most of the time, that we accept those answers at face value.

The same will be true of tools that help generate text, spreadsheets, code, images and anything else we create on our devices—some version of which both Microsoft and Google have promised to offer within their existing productivity services, Microsoft 365 and Google Workspace.

These technologies, and chat-based search, are all based on the latest generation of “generative” AI, capable of creating verbal and visual content and not just processing it the way more established AI has done. And the added trust it will require is one of several ways in which this new generative AI technology is poised to shift even more power into the hands of the biggest tech companies.

Generative AI in all its forms will insinuate technology more deeply into the way we live and work than it already is—not just answering our questions but writing our memos and speeches or even producing poetry and art. And because the financial, intellectual and computational resources needed to develop and run the technology are so enormous, the companies that control these AI systems will be the largest, richest companies.

OpenAI, the creator of the ChatGPT chatbot and DALL-E 2 image generator AIs that have fueled much of the current hype, seemed like an exception to that: a relatively small startup that has driven major AI innovation. But it has leapt into the arms of Microsoft, which has made successive rounds of investment, in part because of the need to pay for the computing power needed to make its systems work. 

The greater concentration of power is all the more important because this technology is both incredibly powerful and inherently flawed: it has a tendency to confidently deliver incorrect information. This means that step one in making this technology mainstream is building it, and step two is minimizing the variety and number of mistakes it inevitably makes.

Trust in AI, in other words, will become the new moat that big technology companies will fight to defend. Lose the user’s trust often enough, and they might abandon your product. For example: In November, Meta made available to the public an AI chat-based search engine for scientific knowledge called Galactica. Perhaps it was in part the engine’s target audience—scientists—but the incorrect answers it sometimes offered inspired such withering criticism that Meta shut down public access to it after just three days, said Meta chief AI scientist Yann LeCun in a recent talk.

Galactica was “the output of a research project versus something intended for commercial use,” says a Meta spokeswoman. In a public statement, Joelle Pineau, managing director of fundamental AI research at Meta, wrote that “given the propensity of large language models such as Galactica to generate text that may appear authentic, but is inaccurate, and because it has moved beyond the research community, we chose to remove the demo from public availability.”

On the other hand, proving your AI more trustworthy could be a competitive advantage more powerful than being the biggest, best or fastest repository of answers. This seems to be Google’s bet, as the company has emphasized in recent announcements and a presentation on Wednesday that as it tests and rolls out its own chat-based and generative AI systems, it will strive for “Responsible AI,” as outlined in 2019 in its “AI Principles.”

My colleague Joanna Stern this past week provided a helpful description of what it’s like to use Microsoft’s Bing search engine and Edge web browser with ChatGPT incorporated. You can join a list to test the service—and Google says it will make its chatbot, named Bard, available at some point in the coming months.

But in the meantime, to see just why trust in these kinds of search engines is so tricky, you can visit other chat-based search engines that already exist. There’s You.com, which will answer your questions via a chatbot, or Andisearch.com, which will summarize any article it returns when you search for a topic on it.

Even these smaller services feel a little like magic. If you ask You.com’s chat module a question like “Please list the best chat AI-based search engines,” it can, under the right circumstances, give you a coherent and succinct answer that includes all the best-known startups in this space. But it can also, depending on small changes in how you phrase that question, add complete nonsense to its answer. 

In my experimentation, You.com would, more often than not, give a reasonably accurate answer, but then add to it the name of a search engine that doesn’t exist at all. Googling the made-up search engine names it threw in revealed that You.com seemed to be misconstruing the names of humans quoted in articles as the names of search engines.

Andi doesn’t return search results in a chat format, precisely because making sure that those answers are accurate is still so difficult, says Chief Executive Angela Hoover. “It’s been super exciting to see these big players validating that conversational search is the future, but nailing factual accuracy is hard to do,” she adds. As a result, for now, Andi offers search results in a conventional format, but offers to use AI to summarize any page it returns.

Andi currently has a team of fewer than 10 people, and has raised $2.5 million so far. It’s impressive what such a small team has accomplished, but it’s clear that making trustworthy AI will require enormous resources, probably on the scale of what companies like Microsoft and Google possess.

There are two reasons for this: The first is the enormous amount of computing infrastructure required, says Tinglong Dai, a professor of operations management at Johns Hopkins University who studies human-AI interaction. That means tens of thousands of computers in big technology companies’ current cloud infrastructures. Some of those computers are used to train the enormous “foundation” models that power generative AI systems. Others specialize in making the trained models available to users, which as the number of users grows can become a more taxing task than the original training.

The second reason, says Dr. Dai, is that it requires enormous human resources to continually test and tune these models, in order to make sure they’re not spouting an inordinate amount of nonsense or biased and offensive speech.

Google has said that it has called on every employee in the company to test its new chat-based search engine and flag any issues with the results it generates. Microsoft, which is already rolling out its chat-based search engine to the public on a limited basis, is doing that kind of testing in public. ChatGPT, on which Microsoft’s chat-based search engine is based, has already proved to be vulnerable to attempts to “jailbreak” it into producing inappropriate content. 

Big tech companies can probably overcome the issues arising from their rollout of AI—Google’s go-slow approach, ChatGPT’s sometimes-inaccurate results, and the incomplete or misleading answers chat-based Bing could offer—by experimenting with these systems on a large scale, as only they can.

“The only reason ChatGPT and other foundational models are so bad at bias and even fundamental facts is they are closed systems, and there is no opportunity for feedback,” says Dr. Dai. Big tech companies like Google have decades of practice at soliciting feedback to improve their algorithmically-generated results. Avenues for such feedback have, for example, long been a feature of both Google Search and Google Maps.

Dr. Dai says that one analogy for the future of trust in AI systems could be one of the least algorithmically-generated sites on the internet: Wikipedia. While the entirely human-written and human-edited encyclopedia isn’t as trustworthy as primary-source material, its users generally know that and find it useful anyway. Wikipedia shows that “social solutions” to problems like trust in the output of an algorithm—or trust in the output of human Wikipedia editors—are possible.

But the model of Wikipedia also shows that the kind of labor-intensive solutions for creating trustworthy AI—which companies like Meta and Google have already employed for years and at scale in their content moderation systems—are likely to entrench the power of existing big technology companies. Only they have not just the computing resources, but also the human resources, to deal with all the misleading, incomplete or biased information their AIs will be generating.

In other words, creating trust by moderating the content generated by AIs might not prove to be so different from creating trust by moderating the content generated by humans. And that is something the biggest technology companies have already shown is a difficult, time-consuming and resource-intensive task they can take on in a way that few other companies can.

The obvious and immediate utility of these new kinds of AIs, when integrated into a search engine or in their many other potential applications, is the reason for the current media, analyst and investor frenzy for AI. It’s clear that this could be a disruptive technology, resetting who is harvesting attention and where they’re directing it, threatening Google’s search monopoly and opening up new markets and new sources of revenue for Microsoft and others.

Based on the runaway success of the ChatGPT AI—perhaps the fastest service to reach 100 million users in history, according to a recent UBS report—it’s clear that being an aggressive first mover in this space could matter a great deal. It’s also clear that being a successful first-mover in this space will require the kinds of resources that only the biggest tech companies can muster.

Source: The AI Boom That Could Make Google and Microsoft Even More Powerful

Who Is Making Sure the A.I. Machines Aren’t Racist?

Good overview of the issues and debates (Google’s earlier motto of “don’t be evil” seems so quaint):

Hundreds of people gathered for the first lecture at what had become the world’s most important conference on artificial intelligence — row after row of faces. Some were East Asian, a few were Indian, and a few were women. But the vast majority were white men. More than 5,500 people attended the meeting, five years ago in Barcelona, Spain.

Timnit Gebru, then a graduate student at Stanford University, remembers counting only six Black people other than herself, all of whom she knew, all of whom were men.

The homogeneous crowd crystallized for her a glaring issue. The big thinkers of tech say A.I. is the future. It will underpin everything from search engines and email to the software that drives our cars, directs the policing of our streets and helps create our vaccines.

But it is being built in a way that replicates the biases of the almost entirely male, predominantly white work force making it.

In the nearly 10 years I’ve written about artificial intelligence, two things have remained a constant: The technology relentlessly improves in fits and starts and in sudden, great leaps forward. And bias is a thread that subtly weaves through that work in a way that tech companies are reluctant to acknowledge.

On her first night home in Menlo Park, Calif., after the Barcelona conference, sitting cross-legged on the couch with her laptop, Dr. Gebru described the A.I. work force conundrum in a Facebook post.

“I’m not worried about machines taking over the world. I’m worried about groupthink, insularity and arrogance in the A.I. community — especially with the current hype and demand for people in the field,” she wrote. “The people creating the technology are a big part of the system. If many are actively excluded from its creation, this technology will benefit a few while harming a great many.”

The A.I. community buzzed about the mini-manifesto. Soon after, Dr. Gebru helped create a new organization, Black in A.I. After finishing her Ph.D., she was hired by Google.

She teamed with Margaret Mitchell, who was building a group inside Google dedicated to “ethical A.I.” Dr. Mitchell had previously worked in the research lab at Microsoft. She had grabbed attention when she told Bloomberg News in 2016 that A.I. suffered from a “sea of dudes” problem. She estimated that she had worked with hundreds of men over the previous five years and about 10 women.

Their work was hailed as groundbreaking. The nascent A.I. industry, it had become clear, needed minders and people with different perspectives.

About six years ago, A.I. in a Google online photo service organized photos of Black people into a folder called “gorillas.” Four years ago, a researcher at a New York start-up noticed that the A.I. system she was working on was egregiously biased against Black people. Not long after, a Black researcher in Boston discovered that an A.I. system couldn’t identify her face — until she put on a white mask.

In 2018, when I told Google’s public relations staff that I was working on a book about artificial intelligence, it arranged a long talk with Dr. Mitchell to discuss her work. As she described how she built the company’s Ethical A.I. team — and brought Dr. Gebru into the fold — it was refreshing to hear from someone so closely focused on the bias problem.

But nearly three years later, Dr. Gebru was pushed out of the company without a clear explanation. She said she had been fired after criticizing Google’s approach to minority hiring and, with a research paper, highlighting the harmful biases in the A.I. systems that underpin Google’s search engine and other services.

“Your life starts getting worse when you start advocating for underrepresented people,” Dr. Gebru said in an email before her firing. “You start making the other leaders upset.”

As Dr. Mitchell defended Dr. Gebru, the company removed her, too. She had searched through her own Google email account for material that would support their position and forwarded emails to another account, which somehow got her into trouble. Google declined to comment for this article.

Their departure became a point of contention for A.I. researchers and other tech workers. Some saw a giant company no longer willing to listen, too eager to get technology out the door without considering its implications. I saw an old problem — part technological and part sociological — finally breaking into the open.

It should have been a wake-up call.

In June 2015, a friend sent Jacky Alciné, a 22-year-old software engineer living in Brooklyn, an internet link for snapshots the friend had posted to the new Google Photos service. Google Photos could analyze snapshots and automatically sort them into digital folders based on what was pictured. One folder might be “dogs,” another “birthday party.”

When Mr. Alciné clicked on the link, he noticed one of the folders was labeled “gorillas.” That made no sense to him, so he opened the folder. He found more than 80 photos he had taken nearly a year earlier of a friend during a concert in nearby Prospect Park. That friend was Black.

He might have let it go if Google had mistakenly tagged just one photo. But 80? He posted a screenshot on Twitter. “Google Photos, y’all messed up,” he wrote, using much saltier language. “My friend is not a gorilla.”

Like facial recognition services, talking digital assistants and conversational “chatbots,” Google Photos relied on an A.I. system that learned its skills by analyzing enormous amounts of digital data.

Called a “neural network,” this mathematical system could learn tasks that engineers could never code into a machine on their own. By analyzing thousands of photos of gorillas, it could learn to recognize a gorilla. It was also capable of egregious mistakes. The onus was on engineers to choose the right data when training these mathematical systems. (In this case, the easiest fix was to eliminate “gorilla” as a photo category.)
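A minimal Python sketch of what that kind of category-level fix can look like in practice; the labels, scores and blocklist here are hypothetical, and this is not Google’s actual code:

```python
# Purely illustrative; not Google's code. Shows the shape of a label-blocklist
# fix: rather than retraining the model, suppress a problematic category at
# prediction time.

BLOCKED_LABELS = {"gorilla"}  # hypothetical blocklist

def safe_labels(predictions, threshold=0.5):
    """Keep confident predictions, dropping any blocklisted category.

    predictions: mapping of label -> confidence score from an image classifier.
    """
    return {
        label: score
        for label, score in predictions.items()
        if score >= threshold and label.lower() not in BLOCKED_LABELS
    }

# Made-up model output for one photo.
print(safe_labels({"person": 0.92, "concert": 0.81, "gorilla": 0.67}))
# -> {'person': 0.92, 'concert': 0.81}
```

The point of the sketch is how blunt the workaround is: the model’s behavior is unchanged, the offending label is simply never shown.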

As a software engineer, Mr. Alciné understood the problem. He compared it to making lasagna. “If you mess up the lasagna ingredients early, the whole thing is ruined,” he said. “It is the same thing with A.I. You have to be very intentional about what you put into it. Otherwise, it is very difficult to undo.”

In 2017, Deborah Raji, a 21-year-old Black woman from Ottawa, sat at a desk inside the New York offices of Clarifai, the start-up where she was working. The company built technology that could automatically recognize objects in digital images and planned to sell it to businesses, police departments and government agencies.

She stared at a screen filled with faces — images the company used to train its facial recognition software.

As she scrolled through page after page of these faces, she realized that most — more than 80 percent — were of white people. More than 70 percent of those white people were male. When Clarifai trained its system on this data, it might do a decent job of recognizing white people, Ms. Raji thought, but it would fail miserably with people of color, and probably women, too.

Clarifai was also building a “content moderation system,” a tool that could automatically identify and remove pornography from images people posted to social networks. The company trained this system on two sets of data: thousands of photos pulled from online pornography sites, and thousands of G‑rated images bought from stock photo services.

The system was supposed to learn the difference between the pornographic and the anodyne. The problem was that the G‑rated images were dominated by white people, and the pornography was not. The system was learning to identify Black people as pornographic.
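A minimal Python sketch of the kind of training-data audit Ms. Raji describes, checking how a dataset’s demographic makeup differs across classes; the class names, groups and counts are invented for illustration:

```python
# Purely illustrative training-data audit: how does the demographic makeup of
# each class differ? Labels and data are invented.
from collections import Counter

def composition_by_class(examples):
    """examples: iterable of (class_label, demographic_group) pairs."""
    counts = {}
    for class_label, group in examples:
        counts.setdefault(class_label, Counter())[group] += 1
    return {
        cls: {grp: n / sum(c.values()) for grp, n in c.items()}
        for cls, c in counts.items()
    }

# Toy data mimicking the skew described above: the "safe" class is mostly
# lighter-skinned subjects, the "explicit" class is not.
toy = ([("safe", "lighter")] * 80 + [("safe", "darker")] * 20
       + [("explicit", "lighter")] * 40 + [("explicit", "darker")] * 60)

for cls, shares in composition_by_class(toy).items():
    print(cls, {grp: round(share, 2) for grp, share in shares.items()})
# A large gap between the two classes is a warning that the model can learn
# skin tone as a proxy for the class label.
```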

“The data we use to train these systems matters,” Ms. Raji said. “We can’t just blindly pick our sources.”

This was obvious to her, but to the rest of the company it was not. Because the people choosing the training data were mostly white men, they didn’t realize their data was biased.

“The issue of bias in facial recognition technologies is an evolving and important topic,” Clarifai’s chief executive, Matt Zeiler, said in a statement. Measuring bias, he said, “is an important step.”

Before joining Google, Dr. Gebru collaborated on a study with a young computer scientist, Joy Buolamwini. A graduate student at the Massachusetts Institute of Technology, Ms. Buolamwini, who is Black, came from a family of academics. Her grandfather specialized in medicinal chemistry, and so did her father.

She gravitated toward facial recognition technology. Other researchers believed it was reaching maturity, but when she used it, she knew it wasn’t.

In October 2016, a friend invited her for a night out in Boston with several other women. “We’ll do masks,” the friend said. Her friend meant skin care masks at a spa, but Ms. Buolamwini assumed Halloween masks. So she carried a white plastic Halloween mask to her office that morning.

It was still sitting on her desk a few days later as she struggled to finish a project for one of her classes. She was trying to get a detection system to track her face. No matter what she did, she couldn’t quite get it to work.

In her frustration, she picked up the white mask from her desk and pulled it over her head. Before it was all the way on, the system recognized her face — or, at least, it recognized the mask.

“Black Skin, White Masks,” she said in an interview, nodding to the 1952 critique of historical racism from the psychiatrist Frantz Fanon. “The metaphor becomes the truth. You have to fit a norm, and that norm is not you.”

Ms. Buolamwini started exploring commercial services designed to analyze faces and identify characteristics like age and sex, including tools from Microsoft and IBM.

She found that when the services read photos of lighter-skinned men, they misidentified sex about 1 percent of the time. But the darker the skin in the photo, the larger the error rate. It rose particularly high with images of women with dark skin. Microsoft’s error rate was about 21 percent. IBM’s was about 35 percent.
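A minimal Python sketch of the disaggregated evaluation behind numbers like these, computing an error rate per subgroup rather than one overall figure; the records are hypothetical and this is not the study’s code:

```python
# Purely illustrative; not the study's code. Computes an error rate per
# demographic subgroup instead of a single overall number.

def error_rate_by_group(records):
    """records: iterable of (group, true_label, predicted_label) tuples."""
    totals, errors = {}, {}
    for group, truth, pred in records:
        totals[group] = totals.get(group, 0) + 1
        if pred != truth:
            errors[group] = errors.get(group, 0) + 1
    return {grp: errors.get(grp, 0) / n for grp, n in totals.items()}

# Hypothetical evaluation records: (subgroup, actual sex, predicted sex).
sample = [
    ("lighter-skinned man", "male", "male"),
    ("lighter-skinned man", "male", "male"),
    ("darker-skinned woman", "female", "male"),
    ("darker-skinned woman", "female", "female"),
]
print(error_rate_by_group(sample))
# -> {'lighter-skinned man': 0.0, 'darker-skinned woman': 0.5}
```

A single aggregate accuracy number hides exactly the gaps this breakdown exposes, which is why the disaggregated view drew so much attention.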

Published in the winter of 2018, the study drove a backlash against facial recognition technology and, particularly, its use in law enforcement. Microsoft’s chief legal officer said the company had turned down sales to law enforcement when there was concern the technology could unreasonably infringe on people’s rights, and he made a public call for government regulation.

Twelve months later, Microsoft backed a bill in Washington State that would require notices to be posted in public places using facial recognition and ensure that government agencies obtained a court order when looking for specific people. The bill passed, and it takes effect later this year. The company, which did not respond to a request for comment for this article, did not back other legislation that would have provided stronger protections.

Ms. Buolamwini began to collaborate with Ms. Raji, who moved to M.I.T. They started testing facial recognition technology from a third American tech giant: Amazon. The company had started to market its technology to police departments and government agencies under the name Amazon Rekognition.

Ms. Buolamwini and Ms. Raji published a study showing that an Amazon face service also had trouble identifying the sex of female and darker-skinned faces. According to the study, the service mistook women for men 19 percent of the time and misidentified darker-skinned women for men 31 percent of the time. For lighter-skinned males, the error rate was zero.

Amazon called for government regulation of facial recognition. It also attacked the researchers in private emails and public blog posts.

“The answer to anxieties over new technology is not to run ‘tests’ inconsistent with how the service is designed to be used, and to amplify the test’s false and misleading conclusions through the news media,” an Amazon executive, Matt Wood, wrote in a blog post that disputed the study and a New York Times article that described it.

In an open letter, Dr. Mitchell and Dr. Gebru rejected Amazon’s argument and called on it to stop selling to law enforcement. The letter was signed by 25 artificial intelligence researchers from Google, Microsoft and academia.

Last June, Amazon backed down. It announced that it would not let the police use its technology for at least a year, saying it wanted to give Congress time to create rules for the ethical use of the technology. Congress has yet to take up the issue. Amazon declined to comment for this article.

Dr. Gebru and Dr. Mitchell had less success fighting for change inside their own company. Corporate gatekeepers at Google were heading them off with a new review system that had lawyers and even communications staff vetting research papers.

Dr. Gebru’s dismissal in December stemmed, she said, from the company’s treatment of a research paper she wrote alongside six other researchers, including Dr. Mitchell and three others at Google. The paper discussed ways that a new type of language technology, including a system built by Google that underpins its search engine, can show bias against women and people of color.

After she submitted the paper to an academic conference, Dr. Gebru said, a Google manager demanded that she either retract the paper or remove the names of Google employees. She said she would resign if the company could not tell her why it wanted her to retract the paper and answer other concerns.

The response: Her resignation was accepted immediately, and Google revoked her access to company email and other services. A month later, it removed Dr. Mitchell’s access after she searched through her own email in an effort to defend Dr. Gebru.

In a Google staff meeting last month, just after the company fired Dr. Mitchell, the head of the Google A.I. lab, Jeff Dean, said the company would create strict rules meant to limit its review of sensitive research papers. He also defended the reviews. He declined to discuss the details of Dr. Mitchell’s dismissal but said she had violated the company’s code of conduct and security policies.

One of Mr. Dean’s new lieutenants, Zoubin Ghahramani, said the company must be willing to tackle hard issues. There are “uncomfortable things that responsible A.I. will inevitably bring up,” he said. “We need to be comfortable with that discomfort.”

But it will be difficult for Google to regain trust — both inside the company and out.

“They think they can get away with firing these people and it will not hurt them in the end, but they are absolutely shooting themselves in the foot,” said Alex Hanna, a longtime part of Google’s 10-member Ethical A.I. team. “What they have done is incredibly myopic.”

Source: https://www.nytimes.com/2021/03/15/technology/artificial-intelligence-google-bias.html

Ousted Black Google Researcher: ‘They Wanted To Have My Presence, But Not Me Exactly’

More on the Google controversy (whose code of conduct originally included “Don’t be evil,” removed in 2018):

When Google unceremoniously ousted Black researcher Timnit Gebru, she felt targeted.

“My theory is that they had wanted me out for a while because I spoke up a lot about issues related to black people, women, and marginalization,” Gebru said in an interview on NPR’s Morning Edition.

At Google, Gebru was the co-lead of the company’s Ethical Artificial Intelligence team, where she was able to parlay her passion for highlighting the societal effects of AI into academic papers that could shape Google’s largest products, like search.

Gebru co-founded Black in AI, a group formed to encourage people of color to pursue careers in artificial intelligence research.

For Google, bringing on Gebru lent credibility to the tech giant’s efforts in examining how technology can exacerbate systemic bias and discrimination. Yet she says Google’s support for Gebru only went so far.

“They wanted to have my presence, but not me exactly. They wanted to have the idea of me being at Google, but not the reality of me being at Google,” Gebru said.

On Wednesday, several of her former colleagues wrote a letter to Google CEO Sundar Pichai asking that Gebru be reinstated, saying her departure has “had a demoralizing effect on the whole of our team.” The researchers also asked that they not be subject to retaliation for supporting Gebru.

That fear is not unfounded. Google has a history of demoting and firing dissenting employees.

In 2018, tens of thousands of employees walked off the job to protest how Google handled sexual harassment cases, among other issues. Organizers say the company pushed them out.

More recently, the National Labor Relations Board accused Google of breaking the law by sacking employees who tried to unionize.

“Google built this whole company up on the idea that we’ll give you free food and a free coffee and pay you well and give you comfortable bean bags to work on as long as you toe the company line,” said William Fitzgerald, who spent a decade at Google working on communications.

Google’s official company policy is: “if you see something that you think isn’t right – speak up!”

What the policy does not state, according to Fitzgerald, is that speaking up can also mean being shown the door.

“Anyone who continues to challenge their power will get squashed or pushed out, and this is something that’s been happening at Google for years now and we’re only now hearing about it,” he said.

Inside Google, women of color and other underrepresented groups who looked up to Gebru have been especially shaken, said former Google employee Ifeoma Ozoma.

“There are serious concerns around her identity as a Black woman and the concerns she raised around diversity as being the main driver for both the firing and the way it was done and the speed,” Ozoma said.

Google CEO Pichai wrote to staff that he is aware the episode has “seeded doubts and led some in our community to question their place at Google.” He apologized for that and committed to fixing it.

Google declined to be interviewed for this story. It points to emails in which executives say they vigorously support free thinking and independent research.

But now even that is up for debate. Before she left Google, the company abruptly asked Gebru to retract a research paper critical of Google’s technology.

Linguist Emily Bender at the University of Washington, who was one of her co-authors, said she feels for researchers inside Google right now.

“I can’t imagine that it wouldn’t have a chilling effect on people who are working there trying to work on this but now looking over their shoulder wondering, ‘When is something all of a sudden going to be retracted?’ and their work going to be basically taken away from them?” Bender said.

After Google demanded that Gebru retract the paper for not meeting the company’s bar for publication, Gebru asked that the process be explained to her, including a list of everyone who was part of the decision. If Google refused, Gebru said she would talk to her manager about “a last date.”

Google took that to mean Gebru offered to resign, and Google leadership say they accepted, but Gebru herself said no such offer was ever extended, only threatened.

Gebru learned that Google had let her go while she was on a vacation road trip across the country.

Former Googler Leslie Miley said he does not believe Google would have handled it the same way if Gebru were a white man.

“You fired a Black woman over her private email while she was on vacation,” Miley said. “This is how tech treats Black women and other underrepresented people.”

At Google, Gebru’s former team laid out in their letter to Pichai what is needed: “swift and structural changes if this work is to continue, and if the legitimacy of the field as a whole is to persevere.”

Source: Ousted Black Google Researcher: ‘They Wanted To Have My Presence, But Not Me Exactly’

Google CEO Apologizes, Vows To Restore Trust After Black Scientist’s Ouster

Doesn’t appear to be universally well received:

Google’s chief executive Sundar Pichai on Wednesday apologized in the aftermath of the dismissal of a prominent Black scientist whose ouster set off widespread condemnation from thousands of Google employees and outside researchers.

Timnit Gebru, who helped lead Google’s Ethical Artificial Intelligence team, said that she was fired last week after having a dispute over a research paper and sending a note to other Google employees criticizing the company for its treatment of people of color and women, particularly in hiring.

“I’ve heard the reaction to Dr. Gebru’s departure loud and clear: it seeded doubts and led some in our community to question their place at Google. I want to say how sorry I am for that, and I accept the responsibility of working to restore your trust,” Pichai wrote to Google employees on Wednesday, according to a copy of the email reviewed by NPR.

Since Gebru was pushed out, more than 2,000 Google employees have signed an open letter demanding answers, calling Gebru’s termination “research censorship” and a “retaliatory firing.”

In his letter, Pichai said the company is conducting a review of how Gebru’s dismissal was handled in order to determine whether there could have been a “more respectful process.”

Pichai went on to say that Google needs to accept responsibility for a prominent Black female leader leaving Google on bad terms.

“This loss has had a ripple effect through some of our least represented communities, who saw themselves and some of their experiences reflected in Dr. Gebru’s. It was also keenly felt because Dr. Gebru is an expert in an important area of AI Ethics that we must continue to make progress on — progress that depends on our ability to ask ourselves challenging questions,” Pichai wrote.

Pichai said Google earlier this year committed to taking a look at all of the company’s systems for hiring and promoting employees to try to increase representation among Black workers and other underrepresented groups.

“The events of the last week are a painful but important reminder of the progress we still need to make,” Pichai wrote in his letter, which was earlier reported by Axios.

In a series of tweets, Gebru said she did not appreciate Pichai’s email to her former colleagues.

“Don’t paint me as an ‘angry Black woman’ for whom you need ‘de-escalation strategies’ for,” Gebru said.

“Finally it does not say ‘I’m sorry for what we did to her and it was wrong.’ What it DOES say is ‘it seeded doubts and led some in our community to question their place at Google.’ So I see this as ‘I’m sorry for how it played out but I’m not sorry for what we did to her yet,'” Gebru wrote.

One Google employee who requested anonymity for fear of retaliation said Pichai’s letter will do little to address the simmering strife among Googlers since Gebru’s firing.

The employee expressed frustration that Pichai did not directly apologize for Gebru’s termination and continued to suggest she was not fired by the company, which Gebru and many of her colleagues say is not true. The employee described Pichai’s letter as “meaningless PR.”

Source: Google CEO Apologizes, Vows To Restore Trust After Black Scientist’s Ouster

Google Researcher Says She Was Fired Over Paper Highlighting Bias in A.I.

Of note:

A well-respected Google researcher said she was fired by the company after criticizing its approach to minority hiring and the biases built into today’s artificial intelligence systems.

Timnit Gebru, who was a co-leader of Google’s Ethical A.I. team, said in a tweet on Wednesday evening that she was fired because of an email she had sent a day earlier to a group that included company employees.

In the email, reviewed by The New York Times, she expressed exasperation over Google’s response to efforts by her and other employees to increase minority hiring and draw attention to bias in artificial intelligence.

“Your life starts getting worse when you start advocating for underrepresented people. You start making the other leaders upset,” the email read. “There is no way more documents or more conversations will achieve anything.”

Her departure from Google highlights growing tension between Google’s outspoken work force and its buttoned-up senior management, while raising concerns over the company’s efforts to build fair and reliable technology. It may also have a chilling effect on both Black tech workers and researchers who have left academia in recent years for high-paying jobs in Silicon Valley.

“Her firing only indicates that scientists, activists and scholars who want to work in this field — and are Black women — are not welcome in Silicon Valley,” said Mutale Nkonde, a fellow with the Stanford Digital Civil Society Lab. “It is very disappointing.”

A Google spokesman declined to comment. In an email sent to Google employees, Jeff Dean, who oversees Google’s A.I. work, including that of Dr. Gebru and her team, called her departure “a difficult moment, especially given the important research topics she was involved in, and how deeply we care about responsible A.I. research as an org and as a company.”

After years of an anything-goes environment where employees engaged in freewheeling discussions in companywide meetings and online message boards, Google has started to crack down on workplace discourse. Many Google employees have bristled at the new restrictions and have argued that the company has broken from a tradition of transparency and free debate.

On Wednesday, the National Labor Relations Board said Google had most likely violated labor law when it fired two employees who were involved in labor organizing. The federal agency said Google illegally surveilled the employees before firing them.

Google’s battles with its workers, who have spoken out in recent years about the company’s handling of sexual harassment and its work with the Defense Department and federal border agencies, have diminished its reputation as a utopia for tech workers with generous salaries, perks and workplace freedom.

Like other technology companies, Google has also faced criticism for not doing enough to resolve the lack of women and racial minorities among its ranks.

The problems of racial inequality, especially the mistreatment of Black employees at technology companies, have plagued Silicon Valley for years. Coinbase, the most valuable cryptocurrency start-up, has experienced an exodus of Black employees in the last two years over what the workers said was racist and discriminatory treatment.

Researchers worry that the people who are building artificial intelligence systems may be building their own biases into the technology. Over the past several years, several public experiments have shown that the systems often interact differently with people of color — perhaps because they are underrepresented among the developers who create those systems.

Dr. Gebru, 37, was born and raised in Ethiopia. In 2018, while a researcher at Stanford University, she helped write a paper that is widely seen as a turning point in efforts to pinpoint and remove bias in artificial intelligence. She joined Google later that year, and helped build the Ethical A.I. team.

After hiring researchers like Dr. Gebru, Google has painted itself as a company dedicated to “ethical” A.I. But it is often reluctant to publicly acknowledge flaws in its own systems.

In an interview with The Times, Dr. Gebru said her exasperation stemmed from the company’s treatment of a research paper she had written with six other researchers, four of them at Google. The paper, also reviewed by The Times, pinpointed flaws in a new breed of language technology, including a system built by Google that underpins the company’s search engine.

These systems learn the vagaries of language by analyzing enormous amounts of text, including thousands of books, Wikipedia entries and other online documents. Because this text includes biased and sometimes hateful language, the technology may end up generating biased and hateful language.

After she and the other researchers submitted the paper to an academic conference, Dr. Gebru said, a Google manager demanded that she either retract the paper from the conference or remove her name and the names of the other Google employees. She refused to do so without further discussion and, in the email sent Tuesday evening, said she would resign after an appropriate amount of time if the company could not explain why it wanted her to retract the paper and answer other concerns.

The company responded to her email, she said, by saying it could not meet her demands and that her resignation was accepted immediately. Her access to company email and other services was immediately revoked.

In his note to employees, Mr. Dean said Google respected “her decision to resign.” Mr. Dean also said that the paper did not acknowledge recent research showing ways of mitigating bias in such systems.

“It was dehumanizing,” Dr. Gebru said. “They may have reasons for shutting down our research. But what is most upsetting is that they refuse to have a discussion about why.”

Dr. Gebru’s departure from Google comes at a time when A.I. technology is playing a bigger role in nearly every facet of Google’s business. The company has hitched its future to artificial intelligence — whether with its voice-enabled digital assistant or its automated placement of advertising for marketers — as the breakthrough technology to make the next generation of services and devices smarter and more capable.

Sundar Pichai, chief executive of Alphabet, Google’s parent company, has compared the advent of artificial intelligence to that of electricity or fire, and has said that it is essential to the future of the company and computing. Earlier this year, Mr. Pichai called for greater regulation and responsible handling of artificial intelligence, arguing that society needs to balance potential harms with new opportunities.

Google has repeatedly committed to eliminating bias in its systems. The trouble, Dr. Gebru said, is that most of the people making the ultimate decisions are men. “They are not only failing to prioritize hiring more people from minority communities, they are quashing their voices,” she said.

Julien Cornebise, an honorary associate professor at University College London and a former researcher with DeepMind, a prominent A.I. lab owned by the same parent company as Google’s, was among many artificial intelligence researchers who said Dr. Gebru’s departure reflected a larger problem in the industry.

“This shows how some large tech companies only support ethics and fairness and other A.I.-for-social-good causes as long as their positive P.R. impact outweighs the extra scrutiny they bring,” he said. “Timnit is a brilliant researcher. We need more like her in our field.”

Source: https://www.nytimes.com/2020/12/03/technology/google-researcher-timnit-gebru.html?action=click&module=Well&pgtype=Homepage&section=Business

A.I. Systems Echo Biases They’re Fed, Putting Scientists on Guard

Yet another article emphasizing the risks:

Last fall, Google unveiled a breakthrough artificial intelligence technology called BERT that changed the way scientists build systems that learn how people write and talk.

But BERT, which is now being deployed in services like Google’s internet search engine, has a problem: It could be picking up on biases in the way a child mimics the bad behavior of his parents.

BERT is one of a number of A.I. systems that learn from lots and lots of digitized information, as varied as old books, Wikipedia entries and news articles. Decades and even centuries of biases — along with a few new ones — are probably baked into all that material.

BERT and its peers are more likely to associate men with computer programming, for example, and generally don’t give women enough credit. One program decided almost everything written about President Trump was negative, even if the actual content was flattering.

Google Finds It’s Underpaying Many Men as It Addresses Wage Equity

Interesting, but more nuanced than the headline would suggest, given hiring at different pay grades. Not sure if the Canadian public service has carried out this kind of detailed analysis (reader input welcome):

When Google conducted a study recently to determine whether the company was underpaying women and members of minority groups, it found, to the surprise of just about everyone, that men were paid less money than women for doing similar work.

The study, which disproportionately led to pay raises for thousands of men, is done every year, but the latest findings arrived as Google and other companies in Silicon Valley face increasing pressure to deal with gender issues in the workplace, from sexual harassment to wage discrimination.

Gender inequality is a radioactive topic at Google. The Labor Department is investigating whether the company systematically underpays women. It has been sued by former employees who claim they were paid less than men with the same qualifications. And last fall, thousands of Google employees protested the way the company handles sexual harassment claims against top executives.

Critics said the results of the pay study could give a false impression. Company officials acknowledged that it did not address whether women were hired at a lower pay grade than men with similar qualifications.

Google seems to be advancing a “flawed and incomplete sense of equality” by making sure men and women receive similar salaries for similar work, said Joelle Emerson, chief executive of Paradigm, a consulting company that advises companies on strategies for increasing diversity. That is not the same as addressing “equity,” she said, which would involve examining the structural hurdles that women face as engineers.

Google has denied paying women less, and the company agreed that compensation among similar job titles was not by itself a complete measure of equity. A more difficult issue to solve — one that critics say Google often mismanages for women — is a human resources concept called leveling. Are employees assigned to the appropriate pay grade for their qualifications?

The company said it was now trying to address the issue.

“Because leveling, performance ratings and promotion impact pay, this year we are undertaking a comprehensive review of these processes to make sure the outcomes are fair and equitable for all employees,” Lauren Barbato, Google’s lead analyst for pay equity, people analytics, wrote in a blog post made public on Monday.

To set an employee’s salary, Google starts with an algorithm using factors like performance, location and job. Next, managers can consider subjective factors: Do they believe the employee has a strong future with the company? Is he or she being paid on a par with peers who make similar contributions? Managers must provide a rationale for the decision.

While the pay bump is helpful, Google’s critics say it doesn’t come close to matching what a woman would make if she had been assigned to the appropriate pay grade in the first place.

Kelly Ellis, a former Google engineer and one of the plaintiffs in the gender-pay suit against the company, said in a legal filing that Google had hired her in 2010 as a Level 3 employee — the category for new software engineers who are recent college graduates — despite her four years of experience. Within a few weeks, a male engineer who had also graduated from college four years earlier was hired for Ms. Ellis’s team — as a Level 4 employee. That meant he received a higher salary and had more opportunities for bonuses, raises and stock compensation, according to the suit. Other men on the team whose qualifications were equal to or less than hers were also brought in at Level 4, the suit says.

The claim could become a class-action suit representing more than 8,300 current and former female employees.

The pay study covered 91 percent of Google’s employees and compared their compensation — salaries, bonuses and company stock — within specific job types, job levels, performance and location.
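A minimal Python sketch (using pandas) of a within-group comparison like the one described, matching employees on job, level and location before comparing pay by gender; the column names and figures are hypothetical, not Google’s data or methodology:

```python
# Purely illustrative, with invented numbers and column names; not Google's
# data or methodology. The idea: match employees on job, level and location
# first, then compare pay by gender within each matched cell.
import pandas as pd

df = pd.DataFrame({
    "job":        ["SWE", "SWE", "SWE", "SWE", "PM", "PM"],
    "level":      [3, 3, 4, 4, 5, 5],
    "location":   ["MTV", "MTV", "MTV", "MTV", "NYC", "NYC"],
    "gender":     ["F", "M", "F", "M", "F", "M"],
    "total_comp": [145_000, 141_000, 188_000, 195_000, 240_000, 238_000],
})

# Mean pay by gender within each (job, level, location) cell, then the gap.
cell_means = df.groupby(["job", "level", "location", "gender"])["total_comp"].mean()
gap = cell_means.unstack("gender")
gap["F_minus_M"] = gap["F"] - gap["M"]
print(gap)
# Note what this comparison cannot show: whether people were assigned the
# right level in the first place, the "leveling" issue the article raises.
```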

It was not possible to compare how racial minorities fared in terms of wage adjustments, Google said, because the United States is the only place where the global company tracks workers’ racial backgrounds.

In response to the study, Google gave $9.7 million in additional compensation to 10,677 employees for this year. Men account for about 69 percent of the company’s work force, but they received a higher percentage of the money. The exact number of men who got raises is unclear.

The company has done the study every year since 2012. At the end of 2017, it adjusted 228 employees’ salaries by a combined total of about $270,000. This year, new hires were included in the analysis for the first time, which Google said probably explained the big change in numbers.

Google’s work force, especially in leadership and high-paying technical roles, is overwhelmingly male and mostly white and Asian. Its efforts to increase diversity have touched off an internal culture war. In 2017, James Damore, a software engineer, wrote a widely circulated memo criticizing the company’s diversity programs. He argued that biological differences and not a lack of opportunity explained the shortage of women in upper-tier positions.

When Google fired Mr. Damore, conservatives argued that the company was dominated by people with liberal political and social views. Mr. Damore sued Google, claiming it is biased against white men with conservative views. The matter has been moved to private arbitration. Its status is unclear.

Google’s parent company, Alphabet, said it had 98,771 employees at the end of 2018. The company declined to provide the number of Google employees, but Google is by far the largest part of the company.

Google informed employees about the findings of its latest pay study in January at a meeting called to discuss a memo about cost-cutting proposals that had been leaked publicly. The proposals, reported earlier by Bloomberg, caused an uproar because they included ideas like slowing the pace at which Google promotes workers and eliminating some of its famous perks.

At the meeting, Sundar Pichai, Google’s chief executive, played down the proposals as the product of brainstorming by members of the human resources staff and not things that senior managers were seriously considering, according to a video viewed by The New York Times.

But in an effort to demonstrate that Google was not skimping on wages, executives said at the meeting that the company had adjusted the pay of more employees than ever before. Ms. Barbato, who presented the findings, said the fact that more men were underpaid was a “surprising trend that we didn’t expect.”

Source: Google Finds It’s Underpaying Many Men as It Addresses Wage Equity

Here’s the Conversation We Really Need to Have About Bias at Google

Ongoing issue of bias in algorithms:

Let’s get this out of the way first: There is no basis for the charge that President Trump leveled against Google this week — that the search engine, for political reasons, favored anti-Trump news outlets in its results. None.

Mr. Trump also claimed that Google advertised President Barack Obama’s State of the Union addresses on its home page but did not highlight his own. That, too, was false, as screenshots show that Google did link to Mr. Trump’s address this year.

But that concludes the “defense of Google” portion of this column. Because whether he knew it or not, Mr. Trump’s false charges crashed into a longstanding set of worries about Google, its biases and its power. When you get beyond the president’s claims, you come upon a set of uncomfortable facts — uncomfortable for Google and for society, because they highlight how in thrall we are to this single company, and how few checks we have against the many unseen ways it is influencing global discourse.

In particular, a raft of research suggests there is another kind of bias to worry about at Google. The naked partisan bias that Mr. Trump alleges is unlikely to occur, but there is real potential for hidden, pervasive and often unintended bias — the sort that led Google to once return links to many pornographic pages for searches for “black girls,” that offered “angry” and “loud” as autocomplete suggestions for the phrase “why are black women so,” or that returned pictures of black people for searches of “gorilla.”

I culled these examples — which Google has apologized for and fixed, but variants of which keep popping up — from “Algorithms of Oppression: How Search Engines Reinforce Racism,” a book by Safiya U. Noble, a professor at the University of Southern California’s Annenberg School of Communication.

Dr. Noble argues that many people have the wrong idea about Google. We think of the search engine as a neutral oracle, as if the company somehow marshals computers and math to objectively sift truth from trash.

But Google is made by humans who have preferences, opinions and blind spots and who work within a corporate structure that has clear financial and political goals. What’s more, because Google’s systems are increasingly created by artificial intelligence tools that learn from real-world data, there’s a growing possibility that it will amplify the many biases found in society, even unbeknown to its creators.

Google says it is aware of the potential for certain kinds of bias in its search results, and that it has instituted efforts to prevent them. “What you have from us is an absolute commitment that we want to continually improve results and continually address these problems in an effective, scalable way,” said Pandu Nayak, who heads Google’s search ranking team. “We have not sat around ignoring these problems.”

For years, Dr. Noble and others who have researched hidden biases — as well as the many corporate critics of Google’s power, like the frequent antagonist Yelp — have tried to start a public discussion about how the search company influences speech and commerce online.

There’s a worry now that Mr. Trump’s incorrect charges could undermine such work. “I think Trump’s complaint undid a lot of good and sophisticated thought that was starting to work its way into public consciousness about these issues,” said Siva Vaidhyanathan, a professor of media studies at the University of Virginia who has studied Google and Facebook’s influence on society.

Dr. Noble suggested a more constructive conversation was the one “about one monopolistic platform controlling the information landscape.”

So, let’s have it.

Google’s most important decisions are secret

In the United States, about eight out of 10 web searches are conducted through Google; across Europe, South America and India, Google’s share is even higher. Google also owns other major communications platforms, among them YouTube and Gmail, and it makes the Android operating system and its app store. It is the world’s dominant internet advertising company, and through that business, it also shapes the market for digital news.

Google’s power alone is not damning. The important question is how it manages that power, and what checks we have on it. That’s where critics say it falls down.

Google’s influence on public discourse happens primarily through algorithms, chief among them the system that determines which results you see in its search engine. These algorithms are secret, which Google says is necessary because search is its golden goose (it does not want Microsoft’s Bing to know what makes Google so great) and because explaining the precise ways the algorithms work would leave them open to being manipulated.

But this initial secrecy creates a troubling opacity. Because search engines take into account the time, place and some personalized factors when you search, the results you get today will not necessarily match the results I get tomorrow. This makes it difficult for outsiders to investigate bias across Google’s results.

A lot of people made fun this week of the paucity of evidence that Mr. Trump put forward to support his claim. But researchers point out that if Google somehow went rogue and decided to throw an election to a favored candidate, it would only have to alter a small fraction of search results to do so. If the public did spot evidence of such an event, it would look thin and inconclusive, too.

“We really have to have a much more sophisticated sense of how to investigate and identify these claims,” said Frank Pasquale, a professor at the University of Maryland’s law school who has studied the role that algorithms play in society.

In a law review article published in 2010, Mr. Pasquale outlined a way for regulatory agencies like the Federal Trade Commission and the Federal Communications Commission to gain access to search data to monitor and investigate claims of bias. No one has taken up that idea. Facebook, which also shapes global discourse through secret algorithms, recently sketched out a plan to give academic researchers access to its data to investigate bias, among other issues.

Google has no similar program, but Dr. Nayak said the company often shares data with outside researchers. He also argued that Google’s results are less “personalized” than people think, suggesting that search biases, when they come up, will be easy to spot.

“All our work is out there in the open — anyone can evaluate it, including our critics,” he said.

Search biases mirror real-world ones

The kind of blanket, intentional bias Mr. Trump is claiming would necessarily involve many workers at Google. And Google is leaky; on hot-button issues — debates over diversity or whether to work with the military — politically minded employees have provided important information to the media. If there was even a rumor that Google’s search team was skewing search for political ends, we would likely see some evidence of such a conspiracy in the media.

That’s why, in the view of researchers who study the issue of algorithmic bias, the more pressing concern is not about Google’s deliberate bias against one or another major political party, but about the potential for bias against those who do not already hold power in society. These people — women, minorities and others who lack economic, social and political clout — fall into the blind spots of companies run by wealthy men in California.

It’s in these blind spots that we find the most problematic biases with Google, like in the way it once suggested a spelling correction for the search “English major who taught herself calculus” — the correct spelling, Google offered, was “English major who taught himself calculus.”

Why did it do that? Google’s explanation was not at all comforting: The phrase “taught himself calculus” is a lot more popular online than “taught herself calculus,” so Google’s computers assumed that it was correct. In other words, a longstanding structural bias in society was replicated on the web, which was reflected in Google’s algorithm, which then hung out live online for who knows how long, unknown to anyone at Google, subtly undermining every female English major who wanted to teach herself calculus.
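To see the mechanism concretely, here is a minimal toy sketch, not Google's actual system: the phrase counts and the substitution table are invented, and the point is only that a purely frequency-based heuristic will "correct" queries toward whichever phrasing dominates the corpus.

```python
# Toy illustration of a popularity-based query "correction" (not Google's actual
# code). The phrase counts and substitution table are invented; the point is that
# a frequency heuristic "corrects" toward whatever phrasing dominates the corpus.

from itertools import product

# Hypothetical counts of how often each phrase appears on the web.
phrase_counts = {
    "english major who taught himself calculus": 12_400,
    "english major who taught herself calculus": 1_300,
}

# Tiny hand-written table of "nearby" words the corrector may swap.
substitutions = {"herself": ["himself"], "himself": ["herself"]}

def suggest_correction(query: str, corpus_counts: dict, min_ratio: float = 5.0):
    """Suggest a variant of the query only if it is far more common in the corpus."""
    words = query.lower().split()
    options = [[w] + substitutions.get(w, []) for w in words]

    best_variant = None
    best_count = corpus_counts.get(query.lower(), 0)
    for candidate_words in product(*options):
        candidate = " ".join(candidate_words)
        count = corpus_counts.get(candidate, 0)
        if candidate != query.lower() and count > best_count * min_ratio:
            best_variant, best_count = candidate, count
    return best_variant

print(suggest_correction("English major who taught herself calculus", phrase_counts))
# Prints the "himself" variant: the skew in the underlying data, not any judgment
# about the query itself, drives the suggestion.
```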

Eventually, this error was fixed. But how many other such errors are hidden in Google? We have no idea.

Google says it understands these worries, and often addresses them. In 2016, some people noticed that it listed a Holocaust-denial site as a top result for the search “Did the Holocaust happen?” That started a large effort at the company to address hate speech and misinformation online. The effort, Dr. Nayak said, shows that “when we see real-world biases making results worse than they should be, we try to get to the heart of the problem.”

Google has escaped recent scrutiny

Yet it is not just these unintended biases that we should be worried about. Researchers point to other issues: Google’s algorithms favor recency and activity, which is why they are so often vulnerable to being manipulated in favor of misinformation and rumor in the aftermath of major news events. (Google says it is working on addressing misinformation.)

Some of Google’s rivals charge that the company favors its own properties in its search results over those of third-party sites — for instance, how it highlights Google’s local reviews instead of Yelp’s in response to local search queries.

Regulators in Europe have already fined Google for this sort of search bias. In 2012, the F.T.C.’s antitrust investigators found credible evidence of unfair search practices at Google. The F.T.C.’s commissioners, however, voted unanimously against bringing charges. Google denies any wrongdoing.

The danger for Google is that Mr. Trump’s charges, however misinformed, create an opening to discuss these legitimate issues.

On Thursday, Senator Orrin Hatch, Republican of Utah, called for the F.T.C. to reopen its Google investigation. There is likely more to come. For the last few years, Facebook has weathered much of society’s skepticism regarding big tech. Now, it may be Google’s time in the spotlight.

Source: Here’s the Conversation We Really Need to Have About Bias at …

Google Is Trying Too Hard (or Not Hard Enough) to Diversify – The New York Times

Interesting internal debates and struggles within Google (and likely not unique to Google):

In 2014, Google became one of the first technology companies to release a race and gender breakdown of its work force. It revealed — to no one’s surprise — that its staff was largely white or Asian and decidedly male.

The company explained that it disclosed the figures, in part, because it wanted to be held accountable publicly for not looking “the way we wanted to.”

Since then, Google has made modest progress in its plan to create a more diverse work force, with the percentage of women at the company ticking up a bit. But a spate of recent incidents and lawsuits highlight the challenges the company has faced as it has been dragged into a national discussion regarding politics, race and gender in the workplace.

Google is being sued by former employees for going too far with its diversity effort. It is also being sued for not going far enough.

“My impression is that Google is not sure what to do,” said Michelle Miller, a co-executive director at Coworker.org, a workers’ rights organization that has been working with some Google employees. “It prevents the ability of a company to function when one group of workers is obstinately focused on defeating their co-workers with whatever it takes.”

The division within Google spilled into the open last year when James Damore, a software engineer, wrote a memo critical of its diversity programs. He argued that biological differences and not a lack of opportunity explained the shortage of women in leadership and technical positions.

Google fired Mr. Damore. He filed a lawsuit in January with another former employee, claiming that the company discriminates against white men with conservative views. In a separate lawsuit, a former recruiter for YouTube sued Google because, he said, he was fired for resisting a mandate to hire only diverse — female or black and Latino — candidates.

Google’s handling of the issue was also upsetting to Mr. Damore’s critics. In another lawsuit filed last month, a former Google employee said he was fired because he was too outspoken in advocating diversity and for spending too much time on “social activism.”

Inside Google, vocal diversity proponents say they are the targets of a small group of employees who are sympathetic to Mr. Damore. In some cases, screenshots of comments made on an internal social network were leaked to online forums frequented by right-wing groups, which searched for and published personal information like home addresses and phone numbers of the Google employees, they said.

In 2015, Google started an internal program called Respect@, which includes a way for employees to anonymously report complaints of inappropriate behavior by co-workers. Some diversity supporters say other employees are taking advantage of this program to accuse them of harassment for out-of-context statements.

“Some people feel threatened by movements that promote diversity and inclusion. They think it means people are going to come for their jobs,” said Liz Fong-Jones, a Google engineer who is a vocal supporter of diversity.

Many big tech companies are struggling with the challenge of creating a more diverse work force. In 2015, Facebook adopted the so-called Rooney Rule. Originally used by the National Football League to prod teams to consider coaching prospects who are black, the rule requires managers to interview candidates from underrepresented backgrounds for open positions. But last year, Facebook’s female engineers said that gender bias was still a problem and that their work received more scrutiny than men’s work.

Even executives tasked with promoting diversity have had difficulties. In October, Denise Young Smith, who was Apple’s vice president of inclusion and diversity, came under fire when she said that there was diversity even among 12 white, blue-eyed, blond men because they had different backgrounds and experiences. She later apologized, saying she did not intend to play down the importance of a non-homogenous work force. She left Apple in December.

The tension is elevated at Google, at least in part, by its workplace culture. Google has encouraged employees to express themselves and challenge one another. It provides many communication systems for people to discuss work and non-work-related issues. Even topics considered out of bounds at other workplaces — like sharp criticism of its own products — are discussed openly and celebrated.

In January, on one of Google’s 90,000 “groups” — internal email lists around a discussion topic — an employee urged colleagues to donate money to help pay Mr. Damore’s legal fees from his lawsuit against Google to promote “viewpoint diversity,” according to a person who saw the posting but is not permitted to share the information publicly.

Last month, Tim Chevalier, who had worked at Google as an engineer until November, sued for wrongful termination, claiming that he was fired “because of his political statements in opposition to the discrimination, harassment and white supremacy he saw being expressed on Google’s internal messaging systems.” He said one employee had suggested that there was a shortage of black and Latino employees at Google because they were “not as good.”

Mr. Chevalier said he had been fired shortly after saying that Republicans were “welcome to leave” if they did not feel comfortable with Google’s policies. He said he had meant that being a Republican did not exempt Google employees from following the company’s code of conduct.

A Google spokeswoman said in a statement that the company encouraged lively debate. But there are limits.

“Creating a more diverse workplace is a big challenge and a priority we’ve been working to address. Some people won’t agree with our approach, and they’re free to express their disagreement,” said the spokeswoman, Gina Scigliano. “But some conduct and discussion in the workplace crosses a line, and we don’t tolerate it. We enforce strong policies, and work with affected employees, to ensure everyone can do their work free of harassment, discrimination and bullying.”

In the past, discussions about diversity in Google’s online chat groups would encounter skeptical but subtle comments or questions. The debate turned openly antagonistic after Mr. Damore’s memo, which was titled “Google’s Ideological Echo Chamber.”

“The James Damore thing brought everything to a head,” said Vicki Holland, a linguist who has worked at Google for seven years. “It brought everything to the surface where everyone could see it.”

Mr. Damore said he began to question Google’s diversity policies at a weekly company meeting last March. At the meeting, Ruth Porat, the chief financial officer of Google’s parent company, Alphabet, and Eileen Naughton, Google’s vice president of people operations, “pointed out and shamed” departments in which women accounted for less than half the staff, according to Mr. Damore’s lawsuit.

The two female executives — who are among the company’s highest-ranking women — said Google’s “racial and gender preferences were not up for debate,” according to the lawsuit. Mr. Damore subsequently attended a “Diversity and Inclusion Summit,” which reinforced his view that Google was “elevating political correctness over merit” with its diversity measures.

Mr. Damore said he had written his memo afterward in response.

Ms. Scigliano, the Google spokeswoman, said the company looked forward to fighting Mr. Damore’s lawsuit in court. Sundar Pichai, Google’s chief executive, said in an August blog post that he had fired Mr. Damore because his memo advanced “harmful gender stereotypes” but that “much of the memo is fair to debate.”

Some employees said they were abstaining from internal debate on sensitive issues because they worried that their comments might be misconstrued or used against them. Like the broader internet, the conversations tend to be dominated by the loudest voices, they said.

Google’s diversity advocates said they would like to see more moderation on internal forums with officials stepping in to defuse tensions before conversations get out of hand. Ms. Miller, the Coworker.org co-director, said Google employees had expressed concern about how this would affect an internal culture rooted in transparency and free expression.

“What’s on everyone’s mind is: Has the culture been inextricably damaged by this environment?” she said.

via Google Is Trying Too Hard (or Not Hard Enough) to Diversify – The New York Times

YouTube, the Great Radicalizer – The New York Times

Good article on how social media reinforces echo chambers and tends towards more extreme views:

At one point during the 2016 presidential election campaign, I watched a bunch of videos of Donald Trump rallies on YouTube. I was writing an article about his appeal to his voter base and wanted to confirm a few quotations.

Soon I noticed something peculiar. YouTube started to recommend and “autoplay” videos for me that featured white supremacist rants, Holocaust denials and other disturbing content.

Since I was not in the habit of watching extreme right-wing fare on YouTube, I was curious whether this was an exclusively right-wing phenomenon. So I created another YouTube account and started watching videos of Hillary Clinton and Bernie Sanders, letting YouTube’s recommender algorithm take me wherever it would.

Before long, I was being directed to videos of a leftish conspiratorial cast, including arguments about the existence of secret government agencies and allegations that the United States government was behind the attacks of Sept. 11. As with the Trump videos, YouTube was recommending content that was more and more extreme than the mainstream political fare I had started with.

Intrigued, I experimented with nonpolitical topics. The same basic pattern emerged. Videos about vegetarianism led to videos about veganism. Videos about jogging led to videos about running ultramarathons.

It seems as if you are never “hard core” enough for YouTube’s recommendation algorithm. It promotes, recommends and disseminates videos in a manner that appears to constantly up the stakes. Given its billion or so users, YouTube may be one of the most powerful radicalizing instruments of the 21st century.

This is not because a cabal of YouTube engineers is plotting to drive the world off a cliff. A more likely explanation has to do with the nexus of artificial intelligence and Google’s business model. (YouTube is owned by Google.) For all its lofty rhetoric, Google is an advertising broker, selling our attention to companies that will pay for it. The longer people stay on YouTube, the more money Google makes.

What keeps people glued to YouTube? Its algorithm seems to have concluded that people are drawn to content that is more extreme than what they started with — or to incendiary content in general.
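As a rough sketch of that incentive (not YouTube's actual system; the intensity scores and the watch-time model below are invented), a recommender that ranks purely on predicted watch time will escalate on its own whenever attention is best held by content a notch more intense than the last video:

```python
# Toy sketch of a watch-time-maximizing recommender (not YouTube's actual system).
# The intensity scores and the watch-time model are invented for illustration.

from dataclasses import dataclass

@dataclass
class Video:
    title: str
    intensity: float  # 0.0 = mild, 1.0 = most extreme (an invented scale)

def predicted_watch_minutes(current: Video, candidate: Video) -> float:
    # Invented model: attention peaks on content a notch more intense than the
    # last video; big jumps or steps back toward milder content hold viewers less.
    step_up = candidate.intensity - current.intensity
    return max(0.0, 10.0 - 40.0 * abs(step_up - 0.3))

def next_recommendation(current: Video, catalog: list) -> Video:
    # Rank purely by predicted watch time; escalation falls out of the objective,
    # not out of any intent to radicalize.
    return max((v for v in catalog if v is not current),
               key=lambda v: predicted_watch_minutes(current, v))

catalog = [
    Video("Intro to jogging", 0.1),
    Video("Marathon training plan", 0.4),
    Video("Ultramarathon documentary", 0.7),
    Video("Running 100 miles through Death Valley", 0.95),
]

current = catalog[0]
for _ in range(3):
    current = next_recommendation(current, catalog)
    print(current.title)
# Marathon training plan -> Ultramarathon documentary -> Running 100 miles ...
# Each hop picks the most attention-holding option, which is always a step up.
```

In this sketch the escalation is a by-product of the objective function rather than any stated goal, which is the article's point about the business model.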

Is this suspicion correct? Good data is hard to come by; Google is loath to share information with independent researchers. But we now have the first inklings of confirmation, thanks in part to a former Google engineer named Guillaume Chaslot.

Mr. Chaslot worked on the recommender algorithm while at YouTube. He grew alarmed at the tactics used to increase the time people spent on the site. Google fired him in 2013, citing his job performance. He maintains the real reason was that he pushed too hard for changes in how the company handles such issues.

The Wall Street Journal conducted an investigation of YouTube content with the help of Mr. Chaslot. It found that YouTube often “fed far-right or far-left videos to users who watched relatively mainstream news sources,” and that such extremist tendencies were evident with a wide variety of material. If you searched for information on the flu vaccine, you were recommended anti-vaccination conspiracy videos.

It is also possible that YouTube’s recommender algorithm has a bias toward inflammatory content. In the run-up to the 2016 election, Mr. Chaslot created a program to keep track of YouTube’s most recommended videos as well as its patterns of recommendations. He discovered that whether you started with a pro-Clinton or pro-Trump video on YouTube, you were many times more likely to end up with a pro-Trump video recommended.
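The auditing approach described here can be sketched simply. This is not Mr. Chaslot's actual program; `fetch_up_next` below is a hypothetical stand-in for however the "up next" list is obtained. The idea is to start from a seed video, repeatedly follow the top recommendations, and count which videos keep reappearing.

```python
# Sketch of a recommendation-chain audit in the spirit of what the article
# describes (not Mr. Chaslot's actual program). `fetch_up_next` is a hypothetical
# stand-in for however the "up next" list is obtained (scraping, an API, etc.).

from collections import Counter
from typing import Callable, List

def crawl_recommendations(seed_video_id: str,
                          fetch_up_next: Callable[[str], List[str]],
                          depth: int = 5,
                          branch: int = 3) -> Counter:
    """Follow the top `branch` recommendations from each video, `depth` hops deep,
    and count how often each video ID is recommended along the way."""
    counts = Counter()
    frontier = [seed_video_id]
    for _ in range(depth):
        next_frontier = []
        for video_id in frontier:
            for rec in fetch_up_next(video_id)[:branch]:
                counts[rec] += 1
                next_frontier.append(rec)
        frontier = next_frontier
    return counts

# Usage sketch: run the same crawl from seeds on opposite sides of an issue and
# compare which videos dominate each recommendation graph.
#   counts = crawl_recommendations("SEED_VIDEO_ID", fetch_up_next=my_scraper)
#   print(counts.most_common(20))
```

Running the same crawl from seeds on opposite sides of an issue and comparing the resulting counts is the kind of comparison the article describes.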

Combine this finding with other research showing that during the 2016 campaign, fake news, which tends toward the outrageous, included much more pro-Trump than pro-Clinton content, and YouTube’s tendency toward the incendiary seems evident.

YouTube has recently come under fire for recommending videos promoting the conspiracy theory that the outspoken survivors of the school shooting in Parkland, Fla., are “crisis actors” masquerading as victims. Jonathan Albright, a researcher at Columbia, recently “seeded” a YouTube account with a search for “crisis actor” and found that following the “up next” recommendations led to a network of some 9,000 videos promoting that and related conspiracy theories, including the claim that the 2012 school shooting in Newtown, Conn., was a hoax.

What we are witnessing is the computational exploitation of a natural human desire: to look “behind the curtain,” to dig deeper into something that engages us. As we click and click, we are carried along by the exciting sensation of uncovering more secrets and deeper truths. YouTube leads viewers down a rabbit hole of extremism, while Google racks up the ad sales.

Human beings have many natural tendencies that need to be vigilantly monitored in the context of modern life. For example, our craving for fat, salt and sugar, which served us well when food was scarce, can lead us astray in an environment in which fat, salt and sugar are all too plentiful and heavily marketed to us. So too our natural curiosity about the unknown can lead us astray on a website that leads us too much in the direction of lies, hoaxes and misinformation.

In effect, YouTube has created a restaurant that serves us increasingly sugary, fatty foods, loading up our plates as soon as we are finished with the last meal. Over time, our tastes adjust, and we seek even more sugary, fatty foods, which the restaurant dutifully provides. When confronted about this by the health department and concerned citizens, the restaurant managers reply that they are merely serving us what we want.

This situation is especially dangerous given how many people — especially young people — turn to YouTube for information. Google’s cheap and sturdy Chromebook laptops, which now make up more than 50 percent of the pre-college laptop education market in the United States, typically come loaded with ready access to YouTube.

This state of affairs is unacceptable but not inevitable. There is no reason to let a company make so much money while potentially helping to radicalize billions of people, reaping the financial benefits while asking society to bear so many of the costs.

via YouTube, the Great Radicalizer – The New York Times