Roose: We Need to Talk About How Good A.I. Is Getting

Of note. The world is going to become more complex, and the potential for AI in many fields will continue to grow, with these tools and programs increasingly able to replace, at least in part, professionals including government workers:

For the past few days, I’ve been playing around with DALL-E 2, an app developed by the San Francisco company OpenAI that turns text descriptions into hyper-realistic images.

OpenAI invited me to test DALL-E 2 (the name is a play on Pixar’s WALL-E and the artist Salvador Dalí) during its beta period, and I quickly got obsessed. I spent hours thinking up weird, funny and abstract prompts to feed the A.I. — “a 3-D rendering of a suburban home shaped like a croissant,” “an 1850s daguerreotype portrait of Kermit the Frog,” “a charcoal sketch of two penguins drinking wine in a Parisian bistro.” Within seconds, DALL-E 2 would spit out a handful of images depicting my request — often with jaw-dropping realism.

Here, for example, is one of the images DALL-E 2 produced when I typed in “black-and-white vintage photograph of a 1920s mobster taking a selfie.” And how it rendered my request for a high-quality photograph of “a sailboat knitted out of blue yarn.”

DALL-E 2 can also go more abstract. The illustration at the top of this article, for example, is what it generated when I asked for a rendering of “infinite joy.” (I liked this one so much I’m going to have it printed and framed for my wall.)

What’s impressive about DALL-E 2 isn’t just the art it generates. It’s how it generates art. These aren’t composites made out of existing internet images — they’re wholly new creations made through a complex A.I. process known as “diffusion,” which starts with a random series of pixels and refines it repeatedly until it matches a given text description. And it’s improving quickly — DALL-E 2’s images are four times as detailed as the images generated by the original DALL-E, which was introduced only last year.
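The iterative-refinement idea behind diffusion can be illustrated with a toy sketch. This is not how DALL-E 2 actually works — real diffusion models use a neural network, conditioned on the text prompt, to predict and subtract noise at each step — but it conveys the core loop the article describes: start from pure noise and repeatedly nudge it toward a coherent result. Here a hypothetical `target` list of pixel values stands in for the learned denoiser's guidance.

```python
import random

def toy_diffusion(target, steps=50, seed=0):
    """Toy sketch of diffusion-style refinement: begin with random
    pixels and denoise toward a target over many small steps.
    (In a real model, a neural network conditioned on the text
    prompt supplies the denoising direction; 'target' is a stand-in.)"""
    rng = random.Random(seed)
    # Start from random pixel values in [0, 1] -- pure noise.
    image = [rng.random() for _ in target]
    for step in range(steps):
        # Each step moves the noisy image a fraction of the way toward
        # what the (hypothetical) denoiser says it should look like.
        alpha = 1.0 / (steps - step)
        image = [(1 - alpha) * px + alpha * tg
                 for px, tg in zip(image, target)]
    return image

result = toy_diffusion([0.0, 0.5, 1.0, 0.25])
print([round(px, 3) for px in result])
```

The loop's step size grows as the process nears the end, so the final step lands exactly on the target — a crude analogue of how diffusion models schedule their denoising steps.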

DALL-E 2 got a lot of attention when it was announced this year, and rightfully so. It’s an impressive piece of technology with big implications for anyone who makes a living working with images — illustrators, graphic designers, photographers and so on. It also raises important questions about what all of this A.I.-generated art will be used for, and whether we need to worry about a surge in synthetic propaganda, hyper-realistic deepfakes or even nonconsensual pornography.

But art is not the only area where artificial intelligence has been making major strides.

Over the past 10 years — a period some A.I. researchers have begun referring to as a “golden decade” — there’s been a wave of progress in many areas of A.I. research, fueled by the rise of techniques like deep learning and the advent of specialized hardware for running huge, computationally intensive A.I. models.

Some of that progress has been slow and steady — bigger models with more data and processing power behind them yielding slightly better results.

But other times, it feels more like the flick of a switch — impossible acts of magic suddenly becoming possible.

Just five years ago, for example, the biggest story in the A.I. world was AlphaGo, a deep learning model built by Google’s DeepMind that could beat the best humans in the world at the board game Go. Training an A.I. to win Go tournaments was a fun party trick, but it wasn’t exactly the kind of progress most people care about.

But last year, DeepMind’s AlphaFold — an A.I. system descended from the Go-playing one — did something truly profound. Using a deep neural network trained to predict the three-dimensional structures of proteins from their one-dimensional amino acid sequences, it essentially solved what’s known as the “protein-folding problem,” which had vexed molecular biologists for decades.

This summer, DeepMind announced that AlphaFold had made predictions for nearly all of the 200 million proteins known to exist — producing a treasure trove of data that will help medical researchers develop new drugs and vaccines for years to come. Last year, the journal Science recognized AlphaFold’s importance, naming it the biggest scientific breakthrough of the year.

Or look at what’s happening with A.I.-generated text.

Only a few years ago, A.I. chatbots struggled even with rudimentary conversations — to say nothing of more difficult language-based tasks.

But now, large language models like OpenAI’s GPT-3 are being used to write screenplays, compose marketing emails and develop video games. (I even used GPT-3 to write a book review for this paper last year — and, had I not clued in my editors beforehand, I doubt they would have suspected anything.)

A.I. is writing code, too — more than a million people have signed up to use GitHub’s Copilot, a tool released last year that helps programmers work faster by automatically finishing their code snippets.

Then there’s Google’s LaMDA, an A.I. model that made headlines a couple of months ago when Blake Lemoine, a senior Google engineer, was fired after claiming that it had become sentient.

Google disputed Mr. Lemoine’s claims, and lots of A.I. researchers have quibbled with his conclusions. But take out the sentience part, and a weaker version of his argument — that LaMDA and other state-of-the-art language models are becoming eerily good at having humanlike text conversations — would not have raised nearly as many eyebrows.

In fact, many experts will tell you that A.I. is getting better at lots of things these days — even in areas, such as language and reasoning, where it once seemed that humans had the upper hand.

“It feels like we’re going from spring to summer,” said Jack Clark, a co-chair of Stanford University’s annual A.I. Index Report. “In spring, you have these vague suggestions of progress, and little green shoots everywhere. Now, everything’s in bloom.”

In the past, A.I. progress was mostly obvious only to insiders who kept up with the latest research papers and conference presentations. But recently, Mr. Clark said, even laypeople can sense the difference.

“You used to look at A.I.-generated language and say, ‘Wow, it kind of wrote a sentence,’” Mr. Clark said. “And now you’re looking at stuff that’s A.I.-generated and saying, ‘This is really funny, I’m enjoying reading this,’ or ‘I had no idea this was even generated by A.I.’”

There is still plenty of bad, broken A.I. out there, from racist chatbots to faulty automated driving systems that result in crashes and injury. And even when A.I. improves quickly, it often takes a while to filter down into products and services that people actually use. An A.I. breakthrough at Google or OpenAI today doesn’t mean that your Roomba will be able to write novels tomorrow.

But the best A.I. systems are now so capable — and improving at such fast rates — that the conversation in Silicon Valley is starting to shift. Fewer experts are confidently predicting that we have years or even decades to prepare for a wave of world-changing A.I.; many now believe that major changes are right around the corner, for better or worse.

Ajeya Cotra, a senior analyst with Open Philanthropy who studies A.I. risk, estimated two years ago that there was a 15 percent chance of “transformational A.I.” — which she and others have defined as A.I. that is good enough to usher in large-scale economic and societal changes, such as eliminating most white-collar knowledge jobs — emerging by 2036.

But in a recent post, Ms. Cotra raised that to a 35 percent chance, citing the rapid improvement of systems like GPT-3.

“A.I. systems can go from adorable and useless toys to very powerful products in a surprisingly short period of time,” Ms. Cotra told me. “People should take more seriously that A.I. could change things soon, and that could be really scary.”

There are, to be fair, plenty of skeptics who say claims of A.I. progress are overblown. They’ll tell you that A.I. is still nowhere close to becoming sentient, or replacing humans in a wide variety of jobs. They’ll say that models like GPT-3 and LaMDA are just glorified parrots, blindly regurgitating their training data, and that we’re still decades away from creating true A.G.I. — artificial general intelligence — that is capable of “thinking” for itself.

There are also tech optimists who believe that A.I. progress is accelerating, and who want it to accelerate faster. Speeding A.I.’s rate of improvement, they believe, will give us new tools to cure diseases, colonize space and avert ecological disaster.

I’m not asking you to take a side in this debate. All I’m saying is: You should be paying closer attention to the real, tangible developments that are fueling it.

After all, A.I. that works doesn’t stay in a lab. It gets built into the social media apps we use every day, in the form of Facebook feed-ranking algorithms, YouTube recommendations and TikTok “For You” pages. It makes its way into weapons used by the military and software used by children in their classrooms. Banks use A.I. to determine who’s eligible for loans, and police departments use it to investigate crimes.

Even if the skeptics are right, and A.I. doesn’t achieve human-level sentience for many years, it’s easy to see how systems like GPT-3, LaMDA and DALL-E 2 could become a powerful force in society. In a few years, the vast majority of the photos, videos and text we encounter on the internet could be A.I.-generated. Our online interactions could become stranger and more fraught, as we struggle to figure out which of our conversational partners are human and which are convincing bots. And tech-savvy propagandists could use the technology to churn out targeted misinformation on a vast scale, distorting the political process in ways we won’t see coming.

It’s a cliché, in the A.I. world, to say things like “we need to have a societal conversation about A.I. risk.” There are already plenty of Davos panels, TED talks, think tanks and A.I. ethics committees out there, sketching out contingency plans for a dystopian future.

What’s missing is a shared, value-neutral way of talking about what today’s A.I. systems are actually capable of doing, and what specific risks and opportunities those capabilities present.

I think three things could help here.

First, regulators and politicians need to get up to speed.

Because of how new many of these A.I. systems are, few public officials have any firsthand experience with tools like GPT-3 or DALL-E 2, nor do they grasp how quickly progress is happening at the A.I. frontier.

We’ve seen a few efforts to close the gap — Stanford’s Institute for Human-Centered Artificial Intelligence recently held a three-day “A.I. boot camp” for congressional staff members, for example — but we need more politicians and regulators to take an interest in the technology. (And I don’t mean that they need to start stoking fears of an A.I. apocalypse, Andrew Yang-style. Even reading a book like Brian Christian’s “The Alignment Problem” or understanding a few basic details about how a model like GPT-3 works would represent enormous progress.)

Otherwise, we could end up with a repeat of what happened with social media companies after the 2016 election — a collision of Silicon Valley power and Washington ignorance, which resulted in nothing but gridlock and testy hearings.

Second, big tech companies investing billions in A.I. development — the Googles, Metas and OpenAIs of the world — need to do a better job of explaining what they’re working on, without sugarcoating or soft-pedaling the risks. Right now, many of the biggest A.I. models are developed behind closed doors, using private data sets and tested only by internal teams. When information about them is made public, it’s often either watered down by corporate P.R. or buried in inscrutable scientific papers.

Downplaying A.I. risks to avoid backlash may be a smart short-term strategy, but tech companies won’t survive long term if they’re seen as having a hidden A.I. agenda that’s at odds with the public interest. And if these companies won’t open up voluntarily, A.I. engineers should go around their bosses and talk directly to policymakers and journalists themselves.

Third, the news media needs to do a better job of explaining A.I. progress to nonexperts. Too often, journalists — and I admit I’ve been a guilty party here — rely on outdated sci-fi shorthand to translate what’s happening in A.I. to a general audience. We sometimes compare large language models to Skynet and HAL 9000, and flatten promising machine learning breakthroughs to panicky “The robots are coming!” headlines that we think will resonate with readers. Occasionally, we betray our ignorance by illustrating articles about software-based A.I. models with photos of hardware-based factory robots — an error that is as inexplicable as slapping a photo of a BMW on a story about bicycles.

In a broad sense, most people think about A.I. narrowly as it relates to us — Will it take my job? Is it better or worse than me at Skill X or Task Y? — rather than trying to understand all of the ways A.I. is evolving, and what that might mean for our future.

I’ll do my part, by writing about A.I. in all its complexity and weirdness without resorting to hyperbole or Hollywood tropes. But we all need to start adjusting our mental models to make space for the new, incredible machines in our midst.

Source: We Need to Talk About How Good A.I. Is Getting

The Hidden Automation Agenda of the Davos Elite

Clarity is needed on automation's likely impact on the labour force, and on what it means for immigration levels:

They’ll never admit it in public, but many of your bosses want machines to replace you as soon as possible.

I know this because, for the past week, I’ve been mingling with corporate executives at the World Economic Forum’s annual meeting in Davos. And I’ve noticed that their answers to questions about automation depend very much on who is listening.

In public, many executives wring their hands over the negative consequences that artificial intelligence and automation could have for workers. They take part in panel discussions about building “human-centered A.I.” for the “Fourth Industrial Revolution” — Davos-speak for the corporate adoption of machine learning and other advanced technology — and talk about the need to provide a safety net for people who lose their jobs as a result of automation.

But in private settings, including meetings with the leaders of the many consulting and technology firms whose pop-up storefronts line the Davos Promenade, these executives tell a different story: They are racing to automate their own work forces to stay ahead of the competition, with little regard for the impact on workers.

All over the world, executives are spending billions of dollars to transform their businesses into lean, digitized, highly automated operations. They crave the fat profit margins automation can deliver, and they see A.I. as a golden ticket to savings, perhaps by letting them whittle departments with thousands of workers down to just a few dozen.

“People are looking to achieve very big numbers,” said Mohit Joshi, the president of Infosys, a technology and consulting firm that helps other businesses automate their operations. “Earlier they had incremental, 5 to 10 percent goals in reducing their work force. Now they’re saying, ‘Why can’t we do it with 1 percent of the people we have?’”

Few American executives will admit wanting to get rid of human workers, a taboo in today’s age of inequality. So they’ve come up with a long list of buzzwords and euphemisms to disguise their intent. Workers aren’t being replaced by machines, they’re being “released” from onerous, repetitive tasks. Companies aren’t laying off workers, they’re “undergoing digital transformation.”

A 2017 survey by Deloitte found that 53 percent of companies had already started to use machines to perform tasks previously done by humans. The figure is expected to climb to 72 percent by next year.

The corporate elite’s A.I. obsession has been lucrative for firms that specialize in “robotic process automation,” or R.P.A. Infosys, which is based in India, reported a 33 percent increase in year-over-year revenue in its digital division. IBM’s “cognitive solutions” unit, which uses A.I. to help businesses increase efficiency, has become the company’s second-largest division, posting $5.5 billion in revenue last quarter. The investment bank UBS projects that the artificial intelligence industry could be worth as much as $180 billion by next year.

Kai-Fu Lee, the author of “AI Superpowers” and a longtime technology executive, predicts that artificial intelligence will eliminate 40 percent of the world’s jobs within 15 years. In an interview, he said that chief executives were under enormous pressure from shareholders and boards to maximize short-term profits, and that the rapid shift toward automation was the inevitable result.

[Photo: The Milwaukee offices of the Taiwanese electronics maker Foxconn, whose chairman has said he plans to replace 80 percent of the company’s workers with robots in five to 10 years. Credit: Lauren Justice for The New York Times]

“They always say it’s more than the stock price,” he said. “But in the end, if you screw up, you get fired.”

Other experts have predicted that A.I. will create more new jobs than it destroys, and that job losses caused by automation will probably not be catastrophic. They point out that some automation helps workers by improving productivity and freeing them to focus on creative tasks over routine ones.

But at a time of political unrest and anti-elite movements on the progressive left and the nationalist right, it’s probably not surprising that all of this automation is happening quietly, out of public view. In Davos this week, several executives declined to say how much money they had saved by automating jobs previously done by humans. And none were willing to say publicly that replacing human workers is their ultimate goal.

“That’s the great dichotomy,” said Ben Pring, the director of the Center for the Future of Work at Cognizant, a technology services firm. “On one hand,” he said, profit-minded executives “absolutely want to automate as much as they can.”

“On the other hand,” he added, “they’re facing a backlash in civic society.”

For an unvarnished view of how some American leaders talk about automation in private, you have to listen to their counterparts in Asia, who often make no attempt to hide their aims. Terry Gou, the chairman of the Taiwanese electronics manufacturer Foxconn, has said the company plans to replace 80 percent of its workers with robots in the next five to 10 years. Richard Liu, the founder of the Chinese e-commerce company JD.com, said at a business conference last year that “I hope my company would be 100 percent automation someday.”

One common argument made by executives is that workers whose jobs are eliminated by automation can be “reskilled” to perform other jobs in an organization. They offer examples like Accenture, which claimed in 2017 to have replaced 17,000 back-office processing jobs without layoffs, by training employees to work elsewhere in the company. In a letter to shareholders last year, Jeff Bezos, Amazon’s chief executive, said that more than 16,000 Amazon warehouse workers had received training in high-demand fields like nursing and aircraft mechanics, with the company covering 95 percent of their expenses.

But these programs may be the exception that proves the rule. There are plenty of stories of successful reskilling — optimists often cite a program in Kentucky that trained a small group of former coal miners to become computer programmers — but there is little evidence that it works at scale. A report by the World Economic Forum this month estimated that of the 1.37 million workers who are projected to be fully displaced by automation in the next decade, only one in four can be profitably reskilled by private-sector programs. The rest, presumably, will need to fend for themselves or rely on government assistance.

In Davos, executives tend to speak about automation as a natural phenomenon over which they have no control, like hurricanes or heat waves. They claim that if they don’t automate jobs as quickly as possible, their competitors will.

“They will be disrupted if they don’t,” said Katy George, a senior partner at the consulting firm McKinsey & Company.

Automating work is a choice, of course, one made harder by the demands of shareholders, but it is still a choice. And even if some degree of unemployment caused by automation is inevitable, these executives can choose how the gains from automation and A.I. are distributed, and whether to give the excess profits they reap as a result to workers, or hoard them for themselves and their shareholders.

The choices made by the Davos elite — and the pressure applied on them to act in workers’ interests rather than their own — will determine whether A.I. is used as a tool for increasing productivity or for inflicting pain.

“The choice isn’t between automation and non-automation,” said Erik Brynjolfsson, the director of M.I.T.’s Initiative on the Digital Economy. “It’s between whether you use the technology in a way that creates shared prosperity, or more concentration of wealth.”