Friedman: Our New Promethean Moment

Friedman is always interesting on where future conversations and emerging issues are headed:

I had a most remarkable but unsettling experience last week. Craig Mundie, the former chief research and strategy officer for Microsoft, was giving me a demonstration of GPT-4, the most advanced version of the artificial intelligence chatbot ChatGPT, developed by OpenAI and launched in November. Craig was preparing to brief the board of my wife’s museum, Planet Word, of which he is a member, about the effect ChatGPT will have on words, language and innovation.

“You need to understand,” Craig warned me before he started his demo, “this is going to change everything about how we do everything. I think that it represents mankind’s greatest invention to date. It is qualitatively different — and it will be transformational.”

Large language models like ChatGPT will steadily increase in their capabilities, Craig added, and take us “toward a form of artificial general intelligence,” delivering efficiencies in operations, ideas, discoveries and insights “that have never been attainable before across every domain.”

Then he did a demonstration. And I realized Craig’s words were an understatement.

First, he asked GPT-4 — for which Craig was a selected advanced tester and which was just released to the public — to summarize Planet Word and its mission in 400 words. It did so perfectly — in a few seconds.

Then he asked it to do the same in 200 words. Another few seconds.

Then he asked it to do the same in Arabic. Just as quickly. Then in Mandarin. Two more seconds. Then in English again — but in the form of a Shakespearean sonnet. A few more seconds.

Then Craig asked GPT-4 to write the same description in an abecedarian verse — where the first line begins with the letter A, the second with B and so on through the alphabet. It did it with stunning creativity, beginning:

Alluring in Washington, is a museum so grand,
Built to teach, inspire, and help us understand.
Curious minds flock to Planet Word’s embrace,
Delving into language and its intricate grace.
Every exhibit here has a story to tell,
From the origins of speech to the art of the quill.

And so on, through Z.
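The abecedarian constraint Friedman describes is simple enough to state programmatically. As a minimal illustration (not anything from OpenAI or the column, just a sketch of the rule), a few lines of Python can check whether successive lines of a verse begin with A, B, C and so on:

```python
import string

def is_abecedarian(verse: str) -> bool:
    """Check that successive non-empty lines begin with A, B, C, ... in order."""
    lines = [line.strip() for line in verse.strip().splitlines() if line.strip()]
    expected = string.ascii_uppercase[:len(lines)]
    return all(line[0].upper() == letter for line, letter in zip(lines, expected))

verse = """Alluring in Washington, is a museum so grand,
Built to teach, inspire, and help us understand.
Curious minds flock to Planet Word's embrace,
Delving into language and its intricate grace."""

print(is_abecedarian(verse))  # True
```

The constraint is trivial to verify and yet, until very recently, far beyond any software to *satisfy* creatively, which is what made the demo so striking.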

I could barely sleep that night. To observe an A.I. system — its software, microchips and connectivity — produce that level of originality in multiple languages in just seconds each time, well, the first thing that came to mind was the observation by the science fiction writer Arthur C. Clarke that “any sufficiently advanced technology is indistinguishable from magic.”

The second thing that came to mind was a moment at the start of “The Wizard of Oz” — the tornado scene where everything and everyone are lifted into a swirling gyre, including Dorothy and Toto, and then swept away from mundane, black and white Kansas to the gleaming futuristic Land of Oz, where everything is in color.

We are about to be hit by such a tornado. This is a Promethean moment we’ve entered — one of those moments in history when certain new tools, ways of thinking or energy sources are introduced that are such a departure and advance on what existed before that you can’t just change one thing, you have to change everything. That is, how you create, how you compete, how you collaborate, how you work, how you learn, how you govern and, yes, how you cheat, commit crimes and fight wars.

We know the key Promethean eras of the last 600 years: the invention of the printing press, the scientific revolution, the agricultural revolution combined with the industrial revolution, the nuclear power revolution, personal computing and the internet and … now this moment.

Only this Promethean moment is not driven by a single invention, like a printing press or a steam engine, but rather by a technology super-cycle. It is our ability to sense, digitize, process, learn, share and act, all increasingly with the help of A.I. That loop is being put into everything — from your car to your fridge to your smartphone to fighter jets — and it’s driving more and more processes every day.

It’s why I call our Promethean era “The Age of Acceleration, Amplification and Democratization.” Never have more humans had access to more cheap tools that amplify their power at a steadily accelerating rate — while being diffused into the personal and working lives of more and more people all at once. And it’s happening faster than most anyone anticipated.

The potential to use these tools to solve seemingly impossible problems — from human biology to fusion energy to climate change — is awe-inspiring. Consider just one example that most people probably haven’t even heard of — the way DeepMind, an A.I. lab owned by Google parent Alphabet, recently used its AlphaFold A.I. system to solve one of the most wicked problems in science — at a speed and scope that was stunning to the scientists who had spent their careers slowly, painstakingly creeping closer to a solution.

The problem is known as protein folding. Proteins are large complex molecules, made up of strings of amino acids. And as my Times colleague Cade Metz explained in a story on AlphaFold, proteins are “the microscopic mechanisms that drive the behavior of the human body and all other living things.”

What each protein can do, though, largely depends on its unique three-dimensional structure. Once scientists can “identify the shapes of proteins,” added Metz, “they can accelerate the ability to understand diseases, create new medicines and otherwise probe the mysteries of life on Earth.”

But, Science News noted, it has taken “decades of slow-going experiments” to reveal “the structure of more than 194,000 proteins, all housed in the Protein Data Bank.” In 2022, though, “the AlphaFold database exploded with predicted structures for more than 200 million proteins.” For a human that would be worthy of a Nobel Prize. Maybe two.

And with that our understanding of the human body took a giant leap forward. As a 2021 scientific paper, “Unfolding AI’s Potential,” published by the Bipartisan Policy Center, put it, AlphaFold is a meta technology: “Meta technologies have the capacity to … help find patterns that aid discoveries in virtually every discipline.”

ChatGPT is another such meta technology.

But as Dorothy discovered when she was suddenly transported to Oz, there was a good witch and a bad witch there, both struggling for her soul. So it will be with the likes of ChatGPT, Google’s Bard and AlphaFold.

Are we ready? It’s not looking that way: We’re debating whether to ban books at the dawn of a technology that can summarize or answer questions about virtually every book for everyone everywhere in a second.

Like so many modern digital technologies based on software and chips, A.I. is “dual use” — it can be a tool or a weapon.

The last time we invented a technology this powerful we created nuclear energy — it could be used to light up your whole country or obliterate the whole planet. But the thing about nuclear energy is that it was developed by governments, which collectively created a system of controls to curb its proliferation to bad actors — not perfectly but not bad.

A.I., by contrast, is being pioneered by private companies for profit. The question we have to ask, Craig argued, is how do we govern a country, and a world, where these A.I. technologies “can be weapons or tools in every domain,” while they are controlled by private companies and are accelerating in power every day? And do it in a way that you don’t throw the baby out with the bathwater.

We are going to need to develop what I call “complex adaptive coalitions” — where business, government, social entrepreneurs, educators, competing superpowers and moral philosophers all come together to define how we get the best and cushion the worst of A.I. No one player in this coalition can fix the problem alone. It requires a very different governing model from traditional left-right politics. And we will have to transition to it amid the worst great-power tensions since the end of the Cold War and culture wars breaking out inside virtually every democracy.

We better figure this out fast because, Toto, we’re not in Kansas anymore.

Source: Our New Promethean Moment

Krauss: Artificially Intelligent Offense?

Of note, yet another concern and issue that needs to be addressed:

…Let’s be clear about this: Valid, empirically derived information is not, in the abstract, either harmful or offensive.

The reception of information can be offensive, and it can, depending upon the circumstances of the listener, potentially result in psychological or physical harm. But precisely because one cannot presume to know all such possible circumstances, following the OpenAI guidelines can instead sanction the censorship of almost any kind of information for fear that someone, somewhere, will be offended.

Even before ChatGPT, this was not a hypothetical worry. Recall the recent firing of a heralded NYT science reporter for using “the N-word” with a group of students in the process of explaining why the use of that word could be inappropriate or hurtful. The argument the NYT editors made was that “intent” was irrelevant. Offense is in the ear of the listener, and that overrides the intent of the speaker or the veracity of his or her argument.

A more relevant example, perhaps, involves the loony guidelines recently provided to editors and reviewers for the journals of the Royal Society of Chemistry to “minimise the risk of publishing inappropriate or otherwise offensive content.” As they describe it, “[o]ffence is a subjective matter and sensitivity to it spans a considerable range; however, we bear in mind that it is the perception of the recipient that we should consider, regardless of the author’s intention [italics mine] … Please consider whether or not any content (words, depictions or imagery) might have the potential to cause offence, referring to the guidelines as needed.”

Moreover, they define offensive content specifically as “Any content that could reasonably offend someone on the basis of their age, gender, race, sexual orientation, religious or political beliefs, marital or parental status, physical features, national origin, social status or disability.”

The mandate against offensiveness propounded by the RSC was taken to another level by the journal Nature Human Behaviour, which indicated that not only would they police language, but they would restrict the nature of scientific research they publish on the basis of social justice concerns about possible “negative social consequences for studied groups.” One can see echoes of both the RSC and Nature actions in the ChatGPT response to my questions.

The essential problem here is removing the obligation, or rather, the opportunity, all of us should have to rationally determine how we respond to potentially offensive content by instead ensuring that any such potentially offensive content may be censored. Intent and accuracy become irrelevant. Veto power in this age of potential victimization is given to the imaginary recipient of information.

Free and open access to information, even information that can cause pain or distress, is essential in a free society. As Christopher Hitchens so often stressed, freedom of speech is primarily important not because it provides an opportunity for speakers to speak out against prevailing winds but because that speech gives listeners or readers the freedom to realize they might want to change their minds.

The problem with the dialogues presented above is that ChatGPT appears to be programmed with a biased perception of what might be offensive or harmful. Moreover, it has been instructed to limit the information it provides to that which its programmers have deemed is neither. What makes this example more than an interesting—or worrying—anecdote is the emerging potential of AI chatbots to further exacerbate already disturbing trends.

As chatbot responses begin to proliferate throughout the Internet, they will, in turn, impact future machine learning algorithms that mine the Internet for information, thus perpetuating and amplifying the impact of the current programming biases evident in ChatGPT.

ChatGPT is admittedly a work in progress, but how the issues of censorship and offense ultimately play out will be important. The last thing anyone should want in the future is a medical diagnostic chatbot that refrains from providing a true diagnosis that may cause pain or anxiety to the receiver. Providing information guaranteed not to disturb is a sure way to squash knowledge and progress. It is also a clear example of the fallacy of attempting to input “universal human values” into AI systems, because one can bet that the choice of which values to input will be subjective.

If the future of AI follows the current trend apparent in ChatGPT, a more dangerous, dystopic machine-based future might not be the one portrayed in the Terminator films but, rather, a future populated by AI versions of Fahrenheit 451 firemen.

Source: Artificially Intelligent Offense?

Mims: The AI Boom That Could Make Google and Microsoft Even More Powerful

Good long read. Hard to be optimistic about how the technology will be used. And the regulators will likely be more than a few steps behind corporations:

Seeing the new artificial intelligence-powered chatbots touted in dueling announcements this past week by Microsoft and Google drives home two major takeaways. First, the feeling of “wow, this definitely could change everything.” And second, the realization that for chat-based search and related AI technologies to have an impact, we’re going to have to put a lot of faith in them and the companies they come from.

When AI is delivering answers, and not just information for us to base decisions on, we’re going to have to trust it much more deeply than we have before. This new generation of chat-based search engines is better described as “answer engines” that can, in a sense, “show their work” by giving links to the webpages they deliver and summarize. But for an answer engine to have real utility, we’re going to have to trust it enough, most of the time, that we accept those answers at face value.

The same will be true of tools that help generate text, spreadsheets, code, images and anything else we create on our devices—some version of which both Microsoft and Google have promised to offer within their existing productivity services, Microsoft 365 and Google Workspace.

These technologies, and chat-based search, are all based on the latest generation of “generative” AI, capable of creating verbal and visual content and not just processing it the way more established AI has done. And the added trust it will require is one of several ways in which this new generative AI technology is poised to shift even more power into the hands of the biggest tech companies.

Generative AI in all its forms will insinuate technology more deeply into the way we live and work than it already is—not just answering our questions but writing our memos and speeches or even producing poetry and art. And because the financial, intellectual and computational resources needed to develop and run the technology are so enormous, the companies that control these AI systems will be the largest, richest companies.

OpenAI, the creator of the ChatGPT chatbot and DALL-E 2 image generator AIs that have fueled much of the current hype, seemed like an exception to that: a relatively small startup that has driven major AI innovation. But it has leapt into the arms of Microsoft, which has made successive rounds of investment, in part because of the need to pay for the computing power needed to make its systems work. 

The greater concentration of power is all the more important because this technology is both incredibly powerful and inherently flawed: it has a tendency to confidently deliver incorrect information. This means that step one in making this technology mainstream is building it, and step two is minimizing the variety and number of mistakes it inevitably makes.

Trust in AI, in other words, will become the new moat that big technology companies will fight to defend. Lose the user’s trust often enough, and they might abandon your product. For example: In November, Meta made available to the public an AI chat-based search engine for scientific knowledge called Galactica. Perhaps it was in part the engine’s target audience—scientists—but the incorrect answers it sometimes offered inspired such withering criticism that Meta shut down public access to it after just three days, said Meta chief AI scientist Yann LeCun in a recent talk.

Galactica was “the output of a research project versus something intended for commercial use,” says a Meta spokeswoman. In a public statement, Joelle Pineau, managing director of fundamental AI research at Meta, wrote that “given the propensity of large language models such as Galactica to generate text that may appear authentic, but is inaccurate, and because it has moved beyond the research community, we chose to remove the demo from public availability.”

On the other hand, proving your AI more trustworthy could be a competitive advantage more powerful than being the biggest, best or fastest repository of answers. This seems to be Google’s bet, as the company has emphasized in recent announcements and a presentation on Wednesday that as it tests and rolls out its own chat-based and generative AI systems, it will strive for “Responsible AI,” as outlined in 2019 in its “AI Principles.”

My colleague Joanna Stern this past week provided a helpful description of what it’s like to use Microsoft’s Bing search engine and Edge web browser with ChatGPT incorporated. You can join a list to test the service—and Google says it will make its chatbot, named Bard, available at some point in the coming months.

But in the meantime, to see just why trust in these kinds of search engines is so tricky, you can visit other chat-based search engines that already exist. There’s You.com, which will answer your questions via a chatbot, or Andisearch.com, which will summarize any article it returns when you search for a topic on it.

Even these smaller services feel a little like magic. If you ask You.com’s chat module a question like “Please list the best chat AI-based search engines,” it can, under the right circumstances, give you a coherent and succinct answer that includes all the best-known startups in this space. But it can also, depending on small changes in how you phrase that question, add complete nonsense to its answer. 

In my experimentation, You.com would, more often than not, give a reasonably accurate answer, but then add to it the name of a search engine that doesn’t exist at all. Googling the made-up search engine names it threw in revealed that You.com seemed to be misconstruing the names of humans quoted in articles as the names of search engines.

Andi doesn’t return search results in a chat format, precisely because making sure that those answers are accurate is still so difficult, says Chief Executive Angela Hoover. “It’s been super exciting to see these big players validating that conversational search is the future, but nailing factual accuracy is hard to do,” she adds. As a result, for now, Andi offers search results in a conventional format, but offers to use AI to summarize any page it returns.

Andi currently has a team of fewer than 10 people, and has raised $2.5 million so far. It’s impressive what such a small team has accomplished, but it’s clear that making trustworthy AI will require enormous resources, probably on the scale of what companies like Microsoft and Google possess.

There are two reasons for this: The first is the enormous amount of computing infrastructure required, says Tinglong Dai, a professor of operations management at Johns Hopkins University who studies human-AI interaction. That means tens of thousands of computers in big technology companies’ current cloud infrastructures. Some of those computers are used to train the enormous “foundation” models that power generative AI systems. Others specialize in making the trained models available to users, which as the number of users grows can become a more taxing task than the original training.

The second reason, says Dr. Dai, is that it requires enormous human resources to continually test and tune these models, in order to make sure they’re not spouting an inordinate amount of nonsense or biased and offensive speech.

Google has said that it has called on every employee in the company to test its new chat-based search engine and flag any issues with the results it generates. Microsoft, which is already rolling out its chat-based search engine to the public on a limited basis, is doing that kind of testing in public. ChatGPT, on which Microsoft’s chat-based search engine is based, has already proved to be vulnerable to attempts to “jailbreak” it into producing inappropriate content. 

Big tech companies can probably overcome the issues arising from their rollout of AI—Google’s go-slow approach, ChatGPT’s sometimes-inaccurate results, and the incomplete or misleading answers chat-based Bing could offer—by experimenting with these systems on a large scale, as only they can.

“The only reason ChatGPT and other foundational models are so bad at bias and even fundamental facts is they are closed systems, and there is no opportunity for feedback,” says Dr. Dai. Big tech companies like Google have decades of practice at soliciting feedback to improve their algorithmically-generated results. Avenues for such feedback have, for example, long been a feature of both Google Search and Google Maps.

Dr. Dai says that one analogy for the future of trust in AI systems could be one of the least algorithmically-generated sites on the internet: Wikipedia. While the entirely human-written and human-edited encyclopedia isn’t as trustworthy as primary-source material, its users generally know that and find it useful anyway. Wikipedia shows that “social solutions” to problems like trust in the output of an algorithm—or trust in the output of human Wikipedia editors—are possible.

But the model of Wikipedia also shows that the kind of labor-intensive solutions for creating trustworthy AI—which companies like Meta and Google have already employed for years and at scale in their content moderation systems—are likely to entrench the power of existing big technology companies. Only they have not just the computing resources, but also the human resources, to deal with all the misleading, incomplete or biased information their AIs will be generating.

In other words, creating trust by moderating the content generated by AIs might not prove to be so different from creating trust by moderating the content generated by humans. And that is something the biggest technology companies have already shown is a difficult, time-consuming and resource-intensive task they can take on in a way that few other companies can.

The obvious and immediate utility of these new kinds of AIs, when integrated into a search engine or in their many other potential applications, is the reason for the current media, analyst and investor frenzy for AI. It’s clear that this could be a disruptive technology, resetting who is harvesting attention and where they’re directing it, threatening Google’s search monopoly and opening up new markets and new sources of revenue for Microsoft and others.

Based on the runaway success of the ChatGPT AI—perhaps the fastest service to reach 100 million users in history, according to a recent UBS report—it’s clear that being an aggressive first mover in this space could matter a great deal. It’s also clear that being a successful first-mover in this space will require the kinds of resources that only the biggest tech companies can muster.

Source: The AI Boom That Could Make Google and Microsoft Even More Powerful

ChatGPT: Remarkably good on immigration and citizenship

With all the coverage on ChatGPT and its ability to answer normal language questions, I tried out some questions.

Remarkably comprehensive and balanced responses, suggesting that the programmers and others behind it have minimized biases and distortions in the software.

While it currently cannot search the internet (which may be a blessing!), I am thinking it may supplant Wikipedia for many requests:

—–

What is multiculturalism?

Multiculturalism is the idea that different cultures and ethnic groups can coexist within a single society. It is based on the belief that diversity is a positive aspect of society and that different cultures can enrich one another. In a multicultural society, different cultural groups are allowed to maintain their own customs and traditions, and there is a general tolerance and acceptance of cultural differences. Multiculturalism is often seen as a way to promote harmony and understanding between different groups within a society.

What are the problems with multiculturalism?

There are several potential problems with multiculturalism. One is that it can lead to the segregation of different cultural groups within a society, with each group living in its own separate community. This can make it difficult for people from different backgrounds to interact and understand one another, and it can create social tensions.

Another problem is that some people argue that multiculturalism can undermine national unity and a sense of shared identity. Critics argue that when different cultural groups are allowed to maintain their own customs and traditions, it can create a fragmented society where people are more loyal to their own cultural group than to the nation as a whole.

Additionally, some people argue that the emphasis on cultural diversity can lead to a lack of recognition of the values and traditions that are common to all members of a society. This can make it difficult to build a sense of shared identity and unity among people from different cultural backgrounds.

Which countries have been most successful in implementing multiculturalism? Which countries have been the least successful?

I’m sorry, but I do not have the information needed to answer that question. As a large language model trained by OpenAI, my knowledge is limited to the text that I have been trained on, which includes a wide range of general knowledge but does not include specific, up-to-date information on current events or recent changes in the policies of different countries. Additionally, I am not able to browse the internet, so I do not have access to the most recent data and statistics on the success of different countries’ multiculturalism policies.

What are the benefits of immigration?

There are many potential benefits of immigration. One is that it can help to fill labor shortages and support economic growth. Immigrants often have different skills and experiences than the native population, and they can bring new ideas and perspectives to a country. This can help to drive innovation and support the growth of new industries.

Immigration can also help to promote cultural diversity and understanding. When people from different backgrounds and cultures come together in a new country, they can learn from one another and gain a deeper understanding of different perspectives and ways of life. This can help to create a more vibrant and dynamic society.

Additionally, immigration can help to address demographic challenges, such as an aging population. In countries with low birth rates and an aging population, immigration can help to maintain a healthy balance between the number of workers and the number of people who are retired or unable to work. This can help to ensure that there are enough workers to support social programs and maintain a strong economy.

What are the disadvantages of immigration?

There are also potential disadvantages of immigration. One is that it can put pressure on a country’s public services and infrastructure, such as schools, hospitals, and public transportation. When a large number of people move to a new country, it can be challenging for the government to provide the necessary resources to support them. This can lead to overcrowding and reduced access to public services for everyone.

Another potential disadvantage is that immigration can lead to job competition and wage stagnation for the native population. When there is a large influx of immigrants into a labor market, it can create competition for jobs, which can lead to lower wages for some workers. This can be particularly challenging for workers with lower levels of education and skills, who may have difficulty competing with immigrants for jobs.

Additionally, immigration can create social tensions and conflict, particularly if there is a perception that immigrants are taking jobs away from the native population or are not assimilating into the country’s culture. This can lead to resentment and hostility between different cultural groups within a society, which can be difficult to address.

What are the advantages of immigrants becoming citizens?

There are several advantages to immigrants becoming citizens. One is that it gives them the right to vote and participate in the political process, which allows them to have a say in the decisions that affect their lives and their communities. Citizenship also gives immigrants the right to hold public office, which can help to ensure that their voices and perspectives are represented in government.

Another advantage of citizenship is that it provides immigrants with greater security and stability. As citizens, they are less likely to be deported or separated from their families, and they have the right to live and work in the country without the threat of losing their legal status. This can give them the freedom to pursue their goals and build their lives without the constant fear of being forced to leave.

Additionally, citizenship can give immigrants access to certain benefits and protections that are not available to non-citizens. For example, in many countries, citizens have the right to access public education and healthcare, and they may be eligible for certain government programs and services. Citizenship can also make it easier for immigrants to travel internationally and to sponsor their family members to join them in the country. Overall, becoming a citizen can provide immigrants with greater opportunities and a sense of belonging in their new country.

Write an article on becoming a Canadian citizen

Becoming a Canadian citizen is a rewarding and exciting experience, but it can also be a complex and challenging process. In this article, we will provide an overview of the steps involved in becoming a Canadian citizen, as well as some tips and advice to help you navigate the process successfully.

The first step in becoming a Canadian citizen is to determine if you are eligible. To be eligible, you must be a permanent resident of Canada, have lived in Canada for at least three out of the last five years, and have a good knowledge of English or French. You must also have good moral character and be willing to swear allegiance to the Queen and to the values of Canada.

If you meet these requirements, the next step is to gather the necessary documentation. This includes proof of your permanent resident status, evidence of your time spent in Canada, and proof of your language skills. You will also need to fill out an application form and pay the required fees.

Once you have submitted your application, it will be reviewed by Immigration, Refugees and Citizenship Canada (IRCC). If your application is approved, you will be invited to attend a citizenship ceremony, where you will take the Oath of Citizenship and become a Canadian citizen.

Becoming a Canadian citizen can bring many benefits, such as the right to vote, the ability to hold public office, and access to certain government programs and services. It can also provide you with a sense of belonging and connection to your new country.

If you are interested in becoming a Canadian citizen, it is important to carefully research the process and to ensure that you have all the necessary information and documentation. You can find more information on the IRCC website, or you can contact the IRCC for assistance. With the right preparation and guidance, you can successfully navigate the process and become a proud Canadian citizen.

What is birth tourism?

Birth tourism is the practice of traveling to another country for the purpose of giving birth in order to obtain citizenship for the child. This is typically done in countries that have generous citizenship laws, such as the United States, Canada, and some countries in Europe, where children born on their soil are automatically granted citizenship.

Birth tourism is a controversial practice, as it can allow people to bypass the usual immigration processes and obtain citizenship for their child without having to meet the normal requirements. Critics argue that it undermines the integrity of a country’s citizenship laws and can create a backdoor route to citizenship for people who may not otherwise be eligible. Some countries have introduced measures to crack down on birth tourism, such as requiring parents to provide proof of their ties to the country or imposing additional fees on foreign-born children.

What is employment equity?

Employment equity is a policy or set of policies aimed at promoting equal opportunities and reducing discrimination in the workplace. The goal of employment equity is to ensure that all individuals, regardless of their race, gender, age, or other personal characteristics, have an equal chance to access employment opportunities and to advance in their careers.

In many countries, employment equity programs are designed to address systemic discrimination and to ensure that historically disadvantaged groups, such as women and minorities, have the same opportunities as other groups in the labor market. This can include measures such as setting targets for hiring and promoting members of these groups, providing training and support to help them succeed in the workplace, and addressing workplace policies and practices that may be biased against them.

Overall, the goal of employment equity is to create a more inclusive and diverse workplace, where all individuals have the opportunity to reach their full potential and contribute to the success of the organization.

Krugman: Does ChatGPT Mean Robots Are Coming for the Skilled Jobs?

Of interest. Much of government work is potentially vulnerable to these technologies. I hope IRCC is exploring this and comparable chat systems to address some of its service pressures:

Will robots take away our jobs?

People have been asking that question for an astonishingly long time. The Regency-era British economist David Ricardo added to the third edition of his classic “Principles of Political Economy,” published in 1821, a chapter titled “On Machinery,” in which he tried to show how the technologies of the early Industrial Revolution could, at least initially, hurt workers. Kurt Vonnegut’s 1952 novel “Player Piano” envisaged a near-future America in which automation has eliminated most employment.

At the level of the economy as a whole, the verdict is clear: So far, machines haven’t done away with the need for workers. U.S. workers are almost five times as productive as they were in the early postwar years, but there has been no long-term upward trend in unemployment:

[Chart: Higher productivity hasn’t hurt overall employment. Credit: FRED]

That said, technology can eliminate particular kinds of jobs. In 1948 half a million Americans were employed mining coal; the great bulk of those jobs had disappeared by the early 21st century not because we stopped mining coal — the big decline in coal production, in favor first of natural gas and then of renewable energy, started only around 15 years ago — but because strip mining and mountaintop removal made it possible to extract an increasing amount of coal with many fewer workers:

[Chart: Some jobs have largely disappeared. Credit: FRED]

It’s true that the jobs that disappear in the face of technological progress have generally been replaced by other jobs. But that doesn’t mean that the process has been painless. Individual workers may not find it easy to change jobs, especially if the new jobs are in different places. They may find their skills devalued; in some cases, as with coal, technological change can uproot communities and their way of life.

This kind of dislocation has, as I said, been a feature of modern societies for at least two centuries. But something new may be happening now.

In the past, the jobs replaced by technology tended to involve manual labor. Machines replaced muscles. On the one hand, industrial robots replaced routine assembly-line work. On the other hand, there has been ever-growing demand for knowledge workers, a term coined by the management consultant Peter Drucker in 1959 for people engaged in nonrepetitive problem solving. Many people, myself included, have said that we’re increasingly becoming a knowledge economy.

But what if machines can take over a large chunk of what we have historically thought of as knowledge work?

Last week the research company OpenAI released — to enormous buzz from tech circles — a program called ChatGPT, which can carry out what look like natural-language conversations. You can ask questions or make requests and get responses that are startlingly clear and even seem well-informed. You can also do fun things — one colleague recently asked for and received an analysis of secular stagnation in sonnet form — but let’s stick with things that might be economically useful.

ChatGPT is only the latest example of technology that seems to be able to carry out tasks that not long ago seemed to require the services not just of human beings but of humans with substantial formal education.

For example, machine translation from one language to another used to be a joke; some readers may have heard the apocryphal tale of the Russian-English translation program that took “the spirit was willing, but the flesh was weak” and ended up with “the vodka was good, but the meat was spoiled.” These days, translation programs may not produce great literature, but they’re adequate for many purposes. And the same is true in many fields.

You can argue that what we often call artificial intelligence isn’t really intelligence. Indeed, it may be a long time before machines can be truly creative or offer deep insight. But then, how much of what human beings do is truly creative or deeply insightful? (Indeed, how much of what gets published in academic journals — a field of endeavor I know pretty well — meets those criteria?)

So quite a few knowledge jobs may be eminently replaceable.

What will this mean for the economy?

It is difficult to predict exactly how A.I. will impact the demand for knowledge workers, as it will likely vary, depending on the industry and specific job tasks. However, it is possible that in some cases, A.I. and automation may be able to perform certain knowledge-based tasks more efficiently than humans, potentially reducing the need for some knowledge workers. This could include tasks such as data analysis, research and report writing. However, it is also worth noting that A.I. and automation may also create new job opportunities for knowledge workers, particularly in fields related to A.I. development and implementation.

OK, I didn’t write the paragraph you just read; ChatGPT did, in response to the question “How will A.I. affect the demand for knowledge workers?” The giveaway, to me at least, is that I still refuse to use “impact” as a verb. And it didn’t explicitly lay out exactly why we should, overall, expect no impact on aggregate employment. But it was arguably better than what many humans, including some people who imagine themselves smart, would have written.

In the long run, productivity gains in knowledge industries, like past gains in traditional industries, will make society richer and improve our lives in general (unless Skynet kills us all). But in the long run, we are all dead, and even before that, some of us may find ourselves either unemployed or earning far less than we expected, given our expensive educations.

Source: Does ChatGPT Mean Robots Are Coming For the Skilled Jobs?