Will Your Job Still Exist In 2030?

More on the expected impact of automation and AI:

Automation is already here. Robots helped build your car and pack your latest online shopping order. A chatbot might help you figure out your credit card balance. A computer program might scan and process your résumé when you apply for work.

What will work in America look like a decade from now? A team of economists at the McKinsey Global Institute set out to answer that question in a new report released Thursday.

The research finds automation widening the gap between urban and rural areas and dramatically affecting people who didn’t go to college or didn’t finish high school. It also identifies occupations poised for massive growth, or at least enough growth to offset the jobs being displaced.

Below are some of the key takeaways from McKinsey’s forecast.

Most jobs will change; some will decline

“Intelligent machines are going to become more prevalent in every business. All of our jobs are going to change,” said Susan Lund, co-author of the report. Almost 40% of U.S. jobs are in occupations that are likely to shrink — though not necessarily disappear — by 2030, the researchers found.

Employing almost 21 million Americans, office support is by far the most common U.S. occupation at risk of losing jobs to digital services, according to McKinsey. Food service is another heavily affected category, as hotel, fast-food and other kitchens automate the work of cooks, dishwashers and others.

At the same time, “the economy is adding jobs that make use of new technologies,” McKinsey economists wrote. Those jobs include software developers and information security specialists — who are constantly in short supply — but also solar panel installers and wind turbine technicians.

Health care jobs, including hearing aid specialists and home health aides, will stay in high demand for the next decade, as baby boomers age. McKinsey also forecast growth for jobs that tap into human creativity or “socioemotional skills” or provide personal service for the wealthy, like interior designers, psychologists, massage therapists, dietitians and landscape architects.

In some occupations, even as jobs disappear, new ones might offset the losses. For example, digital assistants might replace counter attendants and clerks who help with rentals, but more workers might be needed to help shoppers in stores or staff distribution centers, McKinsey economists wrote.

Similarly, enough new jobs will be created in transportation or customer service and sales to offset ones lost by 2030.

Employers and communities could do more to match workers in waning fields to other compatible jobs with less risk of automation. For instance, 900,000 bookkeepers, accountants and auditing clerks nationwide might see their jobs phased out but could be retrained to become loan officers, claims adjusters or insurance underwriters, the McKinsey report said.

Automation is likely to continue widening the gap between job growth in urban and rural areas

By 2030, the majority of job growth may be concentrated in just 25 megacities and their peripheries, while large swaths of the country see slower job creation and even lose jobs, the researchers found. This gap has already widened in the past decade, as Federal Reserve Chairman Jerome Powell noted in his remarks on Wednesday.

Source: Will Your Job Still Exist In 2030?

Facial Expression Analysis Can Help Overcome Racial Bias In The Assessment Of Advertising Effectiveness

Interesting. The advertisers are always ahead of the rest of us…:

There has been significant coverage of bias problems in the use of machine learning to analyze people. There has also been pushback against the use of facial recognition because of both bias and inaccuracy. However, a narrower approach, one focused on recognizing emotions rather than identifying individuals, can address marketing challenges. Sentiment analysis by survey is one thing, but tracking human facial responses can significantly improve the accuracy of the analysis.

The Brookings Institution points to a projection that the US will become a majority-minority nation by 2045, meaning the non-white population will be over 50% of the total. Even before then, the growing demographic shift means that the non-white population has become a significant part of the consumer market. In this multicultural society, it’s important to know whether messages work across those cultures. Today’s marketing needs are much more detailed and subtle than the famous example of the Chevy Nova not selling in Latin America because “no va” means “no go” in Spanish.

It’s also important to understand not only the growth of the multicultural markets, but also what they mean in pure dollars. The following chart from the Collage Group shows that the 2017 revenues from the three largest minority segments are similar to the revenues of entire nations.

It would be foolish for companies to ignore these already large and continually growing segments. While there’s the obvious need to be more inclusive in the images, in particular the people, appearing in ads, the picture is only part of the equation. The right words must also be used to interest different demographics. Of course, that a marketing team thinks it has been more inclusive doesn’t make it so. Just as with other aspects of marketing, these messages must be tested.

Companies have begun to look at vision AI for more than the much-reported-on task of facial recognition, that is, identifying people. While social media and surveys can catch some sentiment, analysis of facial features is more detailed, and recognizing expressions is also an easier AI problem than full facial identification. Identifying basic facial features such as the mouth and the eyes, then tracking how they change as someone watches or reads an advertisement, can catch not only a smile but also the “strength” of that smile. Other types of sentiment capture can also be scaled.
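
To make the mechanics concrete, here is a minimal sketch of the geometry involved, assuming (x, y) landmarks are already supplied by any off-the-shelf face-landmark detector. The landmark layout and the normalization choice are illustrative assumptions, not a description of any vendor’s product.

```python
import numpy as np

def smile_strength(landmarks: np.ndarray, neutral: np.ndarray) -> float:
    """Score a smile as the widening of the mouth relative to a
    neutral-expression frame, normalized by inter-eye distance so the
    score doesn't depend on how close the viewer sits to the camera.

    Both arguments are (N, 2) arrays from any face-landmark detector;
    the indices below are an assumed layout, not a real library's.
    """
    LEFT_EYE, RIGHT_EYE, MOUTH_LEFT, MOUTH_RIGHT = 0, 1, 2, 3  # assumed indices

    def mouth_width(pts: np.ndarray) -> float:
        face_scale = np.linalg.norm(pts[RIGHT_EYE] - pts[LEFT_EYE])
        return np.linalg.norm(pts[MOUTH_RIGHT] - pts[MOUTH_LEFT]) / face_scale

    # Positive values: the mouth is wider than in the neutral frame.
    return mouth_width(landmarks) - mouth_width(neutral)
```

Scored frame by frame while an ad plays, this produces a time series whose peaks suggest when, and roughly how strongly, a viewer smiled.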

Then, without having to identify individual people, information about their demographics can build a picture of how sentiment varies between groups. For instance, the same ad can easily get a different typical reaction from white, middle-aged women than from older black men or East Asian teenagers. With social media polarizing and fragmenting many attitudes, it’s important to understand how marketing messages are received across the target audiences.
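
Once each anonymous viewing session has a reaction score, comparing reactions across segments is plain aggregation. A minimal sketch, with invented column names, segment tags and scores:

```python
import pandas as pd

# One row per anonymous viewing session: a coarse demographic segment
# (no identity) plus a peak reaction score such as the smile strength
# sketched above. All values here are invented for illustration.
sessions = pd.DataFrame({
    "segment": ["women_35_54", "men_55_plus", "teens_13_19", "women_35_54"],
    "peak_smile": [0.08, 0.01, 0.12, 0.05],
})

# Mean reaction and sample size per segment: a large spread between
# segments suggests the ad lands differently across audiences.
print(sessions.groupby("segment")["peak_smile"].agg(["mean", "count"]))
```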

The use of AI to rapidly provide feedback on sentiment will help advertisers better tune their messages, whether aiming at a general message that attracts an audience across the US marketing landscape or finding focused messages that attract specific demographics. One example of marketers leveraging AI in this arena is Collage Group, a market research firm that has helped companies better understand and improve messaging to minority communities. Collage Group recently rolled out AdRate, a process for evaluating ads that integrates AI vision to analyze viewer sentiment.

“Companies have come to understand the growing multicultural nature of the US consumer market,” said David Wellisch, CEO, Collage Group. “Artificial intelligence is improving Collage Group’s ability to help B2C companies understand the different reactions in varied communities and then adapt their messaging to best effect.”

While questions of accuracy and ethics in the use of facial recognition will continue in many areas of business, the opportunity to better tailor messages to the diversity of the market is a clear benefit. Visual AI that enhances the accuracy of sentiment analysis is clearly a segment that will grow.

Source: Facial Expression Analysis Can Help Overcome Racial Bias In The Assessment Of Advertising Effectiveness

San Francisco Is Right: Facial Recognition Must Be Put On Hold

Good analysis by Manjoo:

What are we going to do about all the cameras? The question keeps me up at night, in something like terror.

Cameras are the defining technological advance of our age. They are the keys to our smartphones, the eyes of tomorrow’s autonomous drones and the FOMO engines that drive Facebook, Instagram, TikTok, Snapchat and Pornhub. Cheap, ubiquitous, viral photography has fed social movements like Black Lives Matter, but cameras are already prompting more problems than we know what to do with — revenge porn, live-streamed terrorism, YouTube reactionaries and other photographic ills.

And cameras aren’t done. They keep getting cheaper and — in ways both amazing and alarming — they are getting smarter. Advances in computer vision are giving machines the ability to distinguish and track faces, to make guesses about people’s behaviors and intentions, and to comprehend and navigate threats in the physical environment. In China, smart cameras sit at the foundation of an all-encompassing surveillance totalitarianism unprecedented in human history. In the West, intelligent cameras are now being sold as cheap solutions to nearly every private and public woe, from catching cheating spouses and package thieves to preventing school shootings and immigration violations. I suspect these and more uses will take off, because in my years of covering tech, I’ve gleaned one ironclad axiom about society: If you put a camera in it, it will sell.

That’s why I worry that we’re stumbling dumbly into a surveillance state. And it’s why I think the only reasonable thing to do about smart cameras now is to put a stop to them.

This week, San Francisco’s board of supervisors voted to ban the use of facial-recognition technology by the city’s police and other agencies. Oakland and Berkeley are also considering bans, as is the city of Somerville, Mass. I’m hoping for a cascade. States, cities and the federal government should impose an immediate moratorium on facial recognition, especially its use by law-enforcement agencies. We might still decide, at a later time, to give ourselves over to cameras everywhere. But let’s not jump into an all-seeing future without understanding the risks at hand.

What are the risks? Two new reports by Clare Garvie, a researcher who studies facial recognition at Georgetown Law, brought the dangers home for me. In one report — written with Laura Moy, executive director of Georgetown Law’s Center on Privacy & Technology — Ms. Garvie uncovered municipal contracts indicating that law enforcement agencies in Chicago, Detroit and several other cities are moving quickly, and with little public notice, to install Chinese-style “real time” facial recognition systems.

In Detroit, the researchers discovered that the city signed a $1 million deal with DataWorks Plus, a facial recognition vendor, for software that allows for continuous screening of hundreds of private and public cameras set up around the city — in gas stations, fast-food restaurants, churches, hotels, clinics, addiction treatment centers, affordable-housing apartments and schools. Faces caught by the cameras can be searched against Michigan’s driver’s license photo database. Researchers also obtained the Detroit Police Department’s rules governing how officers can use the system. The rules are broad, allowing police to scan faces “on live or recorded video” for a wide variety of reasons, including to “investigate and/or corroborate tips and leads.” In a letter to Ms. Garvie, James E. Craig, Detroit’s police chief, disputed any “Orwellian activities,” adding that he took “great umbrage” at the suggestion that the police would “violate the rights of law-abiding citizens.”

I’m less optimistic, and so is Ms. Garvie. “Face recognition gives law enforcement a unique ability that they’ve never had before,” Ms. Garvie told me. “That’s the ability to conduct biometric surveillance — the ability to see not just what is happening on the ground but who is doing it. This has never been possible before. We’ve never been able to take mass fingerprint scans of a group of people in secret. We’ve never been able to do that with DNA. Now we can with face scans.”

That ability alters how we should think about privacy in public spaces. It has chilling implications for speech and assembly protected by the First Amendment; it means that the police can watch who participates in protests against the police and keep tabs on them afterward.

In fact, this is already happening. In 2015, when protests erupted in Baltimore over the death of Freddie Gray while in police custody, the Baltimore County Police Department used facial recognition software to find people in the crowd who had outstanding warrants — arresting them immediately, in the name of public safety.

Eyes On Detroit

Detroit’s facial recognition operation taps into high-definition cameras set up around the city under a program called Project Green Light Detroit. Participating businesses send the Detroit Police Department a live feed from their indoor and outdoor cameras. In exchange, they receive “special police attention,” according to the initiative’s website.


But there’s another wrinkle in the debate over facial recognition. In a second report, Ms. Garvie found that for all their alleged power, face-scanning systems are being used by the police in a rushed, sloppy way that should call into question their results.

Here’s one of the many crazy stories in Ms. Garvie’s report: In the spring of 2017, a man was caught on a security camera stealing beer from a CVS store in New York. But the camera didn’t get a good shot of the man, and the city’s face-scanning system returned no match.

The police, however, were undeterred. A detective in the New York Police Department’s facial recognition department thought the man in the pixelated CVS video looked like the actor Woody Harrelson. So the detective went to Google Images, got a picture of the actor and ran his face through the face scanner. That produced a match, and the law made its move. A man was arrested for the crime not because he looked like the guy caught on tape but because Woody Harrelson did.

The robot revolution will be worse for men

Interesting long read and analysis:

Demographics will determine who gets hit worst by automation. Policy will help curb the damage.

The robots will someday take our jobs. But not all our jobs, and we don’t really know how many. Nor do we understand which jobs will be eliminated and which will be transitioned into what some say will be better, less tedious work.

What we do know is that automation and artificial intelligence will affect Americans unevenly, according to data from McKinsey and the 2016 US Census that was analyzed by the Brookings think tank.

Young people — especially those in rural areas or who are underrepresented minorities — will have a greater likelihood of having their jobs replaced by automation. Meanwhile, older, more educated white people living in big cities are more likely to maintain their coveted positions, either because their jobs are irreplaceable or because they’re needed in new jobs alongside our robot overlords.

The Brookings study also warns that automation will exacerbate existing social inequalities along certain geographic and demographic lines, because it will likely eliminate many lower- and middle-skill jobs considered stepping stones to more advanced careers. These job losses will be concentrated in rural areas, particularly the swath of America between the coasts.

However, at least in the case of gender, it’s the men, for once, who will be getting the short end of the stick. Jobs traditionally held by men have a higher “average automation potential” than those held by women, meaning that a greater share of those tasks could be automated with current technology, according to Brookings. That’s because the occupations men are more likely to hold tend to be more manual and more easily replaced by machines and artificial intelligence.

Of course, the real point here is that people of all stripes face employment disruption as new technologies are able to do many of our tasks faster, more efficiently, and more precisely than mere mortals. The implications of this unemployment upheaval are far-reaching and raise many questions: How will people transition to the jobs of the future? What will those jobs be? Is it possible to mitigate the polarizing effects automation will have on our already-stratified society of haves and have-nots?

A recent McKinsey report estimated that by 2030, up to one-third of work activities could be displaced by automation, meaning a large portion of the populace will have to make changes in how they work and support themselves.

“This anger we see among many people across our country feeling like they’re being left behind from the American dream, this report highlights that many of these same people are in the crosshairs of the impact of automation,” said Alastair Fitzpayne, executive director of the Future of Work Initiative at the Aspen Institute.

“Without policy intervention, the problems we see in our economy in terms of wage stagnation, labor participation, alarming levels of growth in low-wage jobs — those problems are likely to get worse, not better,” Fitzpayne told Recode. “Tech has a history that isn’t only negative if you look over the last 150 years. It can improve economic growth, it can create new jobs, it can boost people’s incomes, but you have to make sure the mechanisms are in place for that growth to be inclusive.”

Before we look at potential solutions, here are six charts that break down which groups are going to be affected most by the oncoming automation — and which have a better chance of surviving the robot apocalypse:

Occupation

The type of job you have largely affects your likelihood of being replaced by a machine. Jobs that require precision and repetition — food prep and manufacturing, for example — can be automated much more easily. Jobs that require creativity and critical thinking, like analysts and teachers, can’t as easily be recreated by machines. You can drill down further into which jobs fall under each job type here.

Education

People’s level of education greatly affects the types of work they are eligible for, so education and occupation are closely linked. Less education will more likely land a person in a more automatable job, while more education means more job options.

Age

Younger people are less likely to have attained higher degrees than older people; they’re also more likely to be in entry-level jobs that don’t require as much variation or decision-making as they might have later in life. Therefore, young people are more likely to be employed in occupations that are at risk of automation.

Race

The robot revolution will also increase racial inequality, as underrepresented minorities are more likely to hold jobs with tasks that could be automated — like food service, office administration, and agriculture.

Gender

Men, who have always been more likely to have better jobs and pay than women, also might be the first to have their jobs usurped. That’s because men tend to over-index in production, transportation, and construction jobs — all occupational groups that have tasks with above-average automation exposure. Women, meanwhile, are overrepresented in occupations related to human interaction, like health care and education — jobs that largely require human labor. Women are also now more likely to attain higher education degrees than men, meaning their jobs could be somewhat safer from being usurped by automation.

Location

Heartland states and rural areas — places that have large shares of employment in routine-intensive occupations like those found in the manufacturing, transportation, and extraction industries — contain a disproportionate share of occupations whose tasks are highly automatable. Small metro areas are also highly susceptible to job automation, though places with universities tend to be an exception. Cities — especially ones that are tech-focused and contain a highly educated populace, like New York; San Jose, California; and Chapel Hill, North Carolina — have the lowest automation potential of all.

See how your county could fare on the map below — the darker purple colors represent higher potential for automation:

Note that none of the percentages in the charts above is very small — in most cases, the Brookings study estimates, at least 20 percent of any given demographic will see changes to their tasks due to automation. Of course, none of this means the end of work for any one group, but rather a transition in the way we work that won’t be felt equally.

“The fact that some of the groups struggling most now are among the groups that may face most challenges is a sobering thought,” said Mark Muro, a senior fellow at Brookings’s Metropolitan Policy Program.

In the worst-case scenario, automation will cause unemployment in the US to soar and exacerbate existing social divides. Depending on the estimate, anywhere from 3 million to 80 million people in the US could lose their jobs, so the implications could be dire.

“The Mad Max thing is possible, maybe not here but the impact on developing countries could be a lot worse as there was less stability to begin with,” said Martin Ford, author of Rise of the Robots and Architects of Intelligence. “Ultimately, it depends on the choices we make, what we do, how we can adapt.”

Fortunately, there are a number of potential solutions. The Brookings study and others lay out ways to mitigate job loss, and maybe even make the jobs of the future better and more attainable. The hardest part will be getting the government and private sector to agree on and pay for them. The Brookings policy recommendations include:

  • Create a universal adjustment benefit for laid-off workers. This involves offering career counseling, education, and training in new, relevant skills, and giving displaced workers financial support while they work on getting a new job. But as we know from the first manufacturing revolution, it’s difficult if not impossible to get government and corporations on board with aiding and reeducating displaced low-skilled workers. Indeed, many cities across the Rust Belt have yet to recover from the automation of car and steel plants in the last century. Government universal adjustment programs, which vary in cost based on their size and scope, provide a template but have had their own failings. Some suggest a carbon tax could be a way to create billions of dollars in revenue for universal benefits or even universal basic income. Additionally, taxing income as opposed to labor — which could become scarcer with automation — provides other ways to fund universal benefits.
  • Maintain a full-employment economy. A focus on creating new jobs through subsidized employment programs will help create jobs for all who want them. Being employed will cushion some of the blow associated with transitioning jobs. Progressive Democrats’ proposed Green New Deal, which would create jobs geared toward lessening the United States’ dependence on fossil fuels, could be one way of getting to full employment. Brookings also recommends a federal monetary policy that prioritizes full employment over fighting inflation — a feasible action, but one that would require a meaningful change to the Fed’s longstanding priorities.
  • Introduce portable benefits programs. This would give workers access to traditional employment benefits like health care, regardless of whether or where they’re employed. If people are taken care of in the meantime, some of the stress of transitioning to new jobs would be lessened. These benefits also allow the possibility of part-time jobs or gig work — something that has lately become more of a necessity for many Americans. Currently, half of Americans get their health care through their jobs, and doctors and politicians have historically fought against government-run systems. The concept of portable benefits has recently been popular among freelance unions as well as among contract workers employed in gig economy jobs like Uber.
  • Pay special attention to communities that are hardest-hit. As we know from the charts above, some parts of America will have it way worse than others. But there are already a number of programs in place that provide regional protection for at-risk communities that could be expanded upon to deal with disruption from automation. The Department of Defense already does this on a smaller scale, with programs to help communities adjust after base closures or other program cancellations. Automation aid efforts would provide a variety of support, including grants and project management, as well as funding to convert facilities into new uses. Additionally, “Opportunity Zones” in the tax code — popular among the tech set — give companies tax breaks for investing in low-income areas. These investments in turn create jobs and stimulate spending in areas where it’s most needed.
  • Increase investment in AI, automation, and related technology. This may seem counterintuitive, seeing as automation is causing many of these problems in the first place, but Brookings believes that embracing this new tech — not erecting barriers to the inevitable — will generate the economic productivity needed to increase both standards of living and jobs outside of those that will be automated. “We are not vilifying these technologies; we are calling attention to positive side effects,” Brookings’s Muro said. “These technologies will be integral in boosting American productivity, which is dragging.”

None of these solutions, of course, is a silver bullet, but in conjunction, they could help mitigate some of the pain Americans will face from increased automation — if we act soon. Additionally, many of these ideas currently seem rather progressive, so they could be difficult to implement in a Republican-led government.

“I’m a long-run optimist. I think we will work it out. We have to — we have no choice,” Ford told Recode, emphasizing that humanity also stands to gain huge benefits from using AI and robotics to solve our biggest problems, like climate change and disease.

“The short term, though, could be tough — I worry about our ability to react in that time frame,” Ford said, especially given the current political climate. “But there comes a point when the cost of not adapting exceeds the cost of change.”

Source: The robot revolution will be worse for men

The Hidden Automation Agenda of the Davos Elite

Clarity is needed on the likely greater impact on the labour force, and on immigration levels:

They’ll never admit it in public, but many of your bosses want machines to replace you as soon as possible.

I know this because, for the past week, I’ve been mingling with corporate executives at the World Economic Forum’s annual meeting in Davos. And I’ve noticed that their answers to questions about automation depend very much on who is listening.

In public, many executives wring their hands over the negative consequences that artificial intelligence and automation could have for workers. They take part in panel discussions about building “human-centered A.I.” for the “Fourth Industrial Revolution” — Davos-speak for the corporate adoption of machine learning and other advanced technology — and talk about the need to provide a safety net for people who lose their jobs as a result of automation.

But in private settings, including meetings with the leaders of the many consulting and technology firms whose pop-up storefronts line the Davos Promenade, these executives tell a different story: They are racing to automate their own work forces to stay ahead of the competition, with little regard for the impact on workers.

All over the world, executives are spending billions of dollars to transform their businesses into lean, digitized, highly automated operations. They crave the fat profit margins automation can deliver, and they see A.I. as a golden ticket to savings, perhaps by letting them whittle departments with thousands of workers down to just a few dozen.

“People are looking to achieve very big numbers,” said Mohit Joshi, the president of Infosys, a technology and consulting firm that helps other businesses automate their operations. “Earlier they had incremental, 5 to 10 percent goals in reducing their work force. Now they’re saying, ‘Why can’t we do it with 1 percent of the people we have?’”

Few American executives will admit wanting to get rid of human workers, a taboo in today’s age of inequality. So they’ve come up with a long list of buzzwords and euphemisms to disguise their intent. Workers aren’t being replaced by machines, they’re being “released” from onerous, repetitive tasks. Companies aren’t laying off workers, they’re “undergoing digital transformation.”

A 2017 survey by Deloitte found that 53 percent of companies had already started to use machines to perform tasks previously done by humans. The figure is expected to climb to 72 percent by next year.

The corporate elite’s A.I. obsession has been lucrative for firms that specialize in “robotic process automation,” or R.P.A. Infosys, which is based in India, reported a 33 percent increase in year-over-year revenue in its digital division. IBM’s “cognitive solutions” unit, which uses A.I. to help businesses increase efficiency, has become the company’s second-largest division, posting $5.5 billion in revenue last quarter. The investment bank UBS projects that the artificial intelligence industry could be worth as much as $180 billion by next year.

Kai-Fu Lee, the author of “AI Superpowers” and a longtime technology executive, predicts that artificial intelligence will eliminate 40 percent of the world’s jobs within 15 years. In an interview, he said that chief executives were under enormous pressure from shareholders and boards to maximize short-term profits, and that the rapid shift toward automation was the inevitable result.

The Milwaukee offices of the Taiwanese electronics maker Foxconn, whose chairman has said he plans to replace 80 percent of the company’s workers with robots in five to 10 years. Credit: Lauren Justice for The New York Times

“They always say it’s more than the stock price,” he said. “But in the end, if you screw up, you get fired.”

Other experts have predicted that A.I. will create more new jobs than it destroys, and that job losses caused by automation will probably not be catastrophic. They point out that some automation helps workers by improving productivity and freeing them to focus on creative tasks over routine ones.

But at a time of political unrest and anti-elite movements on the progressive left and the nationalist right, it’s probably not surprising that all of this automation is happening quietly, out of public view. In Davos this week, several executives declined to say how much money they had saved by automating jobs previously done by humans. And none were willing to say publicly that replacing human workers is their ultimate goal.

“That’s the great dichotomy,” said Ben Pring, the director of the Center for the Future of Work at Cognizant, a technology services firm. “On one hand,” he said, profit-minded executives “absolutely want to automate as much as they can.”

“On the other hand,” he added, “they’re facing a backlash in civic society.”

For an unvarnished view of how some American leaders talk about automation in private, you have to listen to their counterparts in Asia, who often make no attempt to hide their aims. Terry Gou, the chairman of the Taiwanese electronics manufacturer Foxconn, has said the company plans to replace 80 percent of its workers with robots in the next five to 10 years. Richard Liu, the founder of the Chinese e-commerce company JD.com, said at a business conference last year that “I hope my company would be 100 percent automation someday.”

One common argument made by executives is that workers whose jobs are eliminated by automation can be “reskilled” to perform other jobs in an organization. They offer examples like Accenture, which claimed in 2017 to have replaced 17,000 back-office processing jobs without layoffs, by training employees to work elsewhere in the company. In a letter to shareholders last year, Jeff Bezos, Amazon’s chief executive, said that more than 16,000 Amazon warehouse workers had received training in high-demand fields like nursing and aircraft mechanics, with the company covering 95 percent of their expenses.

But these programs may be the exception that proves the rule. There are plenty of stories of successful reskilling — optimists often cite a program in Kentucky that trained a small group of former coal miners to become computer programmers — but there is little evidence that it works at scale. A report by the World Economic Forum this month estimated that of the 1.37 million workers who are projected to be fully displaced by automation in the next decade, only one in four can be profitably reskilled by private-sector programs. The rest, presumably, will need to fend for themselves or rely on government assistance.

In Davos, executives tend to speak about automation as a natural phenomenon over which they have no control, like hurricanes or heat waves. They claim that if they don’t automate jobs as quickly as possible, their competitors will.

“They will be disrupted if they don’t,” said Katy George, a senior partner at the consulting firm McKinsey & Company.

Automating work is a choice, of course, one made harder by the demands of shareholders, but it is still a choice. And even if some degree of unemployment caused by automation is inevitable, these executives can choose how the gains from automation and A.I. are distributed, and whether to give the excess profits they reap as a result to workers, or hoard them for themselves and their shareholders.

The choices made by the Davos elite — and the pressure applied on them to act in workers’ interests rather than their own — will determine whether A.I. is used as a tool for increasing productivity or for inflicting pain.

“The choice isn’t between automation and non-automation,” said Erik Brynjolfsson, the director of M.I.T.’s Initiative on the Digital Economy. “It’s between whether you use the technology in a way that creates shared prosperity, or more concentration of wealth.”

Amazon Is Pushing Facial Technology That a Study Says Could Be Biased

Of note. These kinds of studies are important to expose the bias inherent in some corporate facial recognition systems:

Over the last two years, Amazon has aggressively marketed its facial recognition technology to police departments and federal agencies as a service to help law enforcement identify suspects more quickly. It has done so as another tech giant, Microsoft, has called on Congress to regulate the technology, arguing that it is too risky for companies to oversee on their own.

Now a new study from researchers at the M.I.T. Media Lab has found that Amazon’s system, Rekognition, had much more difficulty in telling the gender of female faces and of darker-skinned faces in photos than similar services from IBM and Microsoft. The results raise questions about potential bias that could hamper Amazon’s drive to popularize the technology.

In the study, published Thursday, Rekognition made no errors in recognizing the gender of lighter-skinned men. But it misclassified women as men 19 percent of the time, the researchers said, and mistook darker-skinned women for men 31 percent of the time. Microsoft’s technology mistook darker-skinned women for men just 1.5 percent of the time.
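
The evaluation behind numbers like these is simple to state: compute the error rate separately for each subgroup of a labeled benchmark, rather than one overall figure. A minimal sketch, with the subgroup tagging scheme assumed for illustration:

```python
import numpy as np

def subgroup_error_rates(y_true, y_pred, groups):
    """Gender-classification error rate per subgroup.

    y_true / y_pred: true and predicted gender labels; groups: a
    parallel array of tags such as "darker_female", loosely modeled on
    the benchmark design the study describes (an assumption here).
    """
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    return {
        g: float(np.mean(y_true[groups == g] != y_pred[groups == g]))
        for g in np.unique(groups)
    }
```

A single headline accuracy can look excellent while one subgroup’s error rate is many times another’s; disaggregating like this is the whole point of the study’s methodology.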

A study published a year ago found similar problems in the programs built by IBM, Microsoft and Megvii, an artificial intelligence company in China known as Face++. Those results set off an outcry that was amplified when a co-author of the study, Joy Buolamwini, posted YouTube videos showing the technology misclassifying famous African-American women, like Michelle Obama, as men.

The companies in last year’s report all reacted by quickly releasing more accurate technology. For the latest study, Ms. Buolamwini said, she sent a letter with some preliminary results to Amazon seven months ago. But she said that she hadn’t heard back from Amazon, and that when she and a co-author retested the company’s product a couple of months later, it had not improved.

Matt Wood, general manager of artificial intelligence at Amazon Web Services, said the researchers had examined facial analysis — a technology that can spot features such as mustaches or expressions such as smiles — and not facial recognition, a technology that can match faces in photos or video stills to identify individuals. Amazon markets both services.

“It’s not possible to draw a conclusion on the accuracy of facial recognition for any use case — including law enforcement — based on results obtained using facial analysis,” Dr. Wood said in a statement. He added that the researchers had not tested the latest version of Rekognition, which was updated in November.

Amazon said that in recent internal tests using an updated version of its service, the company found no difference in accuracy in classifying gender across all ethnicities.

The M.I.T. researchers used these and other photos to study the accuracy of facial technology in identifying gender.

With advancements in artificial intelligence, facial technologies — services that can be used to identify people in crowds, analyze their emotions, or detect their age and facial characteristics — are proliferating. Now, as companies begin to market these services more aggressively for uses like policing and vetting job candidates, they have emerged as a lightning rod in the debate about whether and how Congress should regulate powerful emerging technologies.

The new study, scheduled to be presented Monday at an artificial intelligence and ethics conference in Honolulu, is sure to inflame that argument.

Proponents see facial recognition as an important advance in helping law enforcement agencies catch criminals and find missing children. Some police departments, and the Federal Bureau of Investigation, have tested Amazon’s product.

But civil liberties experts warn that it can also be used to secretly identify people — potentially chilling Americans’ ability to speak freely or simply go about their business anonymously in public.

Over the last year, Amazon has come under intense scrutiny by federal lawmakers, the American Civil Liberties Union, shareholders, employees and academic researchers for marketing Rekognition to law enforcement agencies. That is partly because, unlike Microsoft, IBM and other tech giants, Amazon has been less willing to publicly discuss concerns.

Amazon, citing customer confidentiality, has also declined to answer questions from federal lawmakers about which government agencies are using Rekognition or how they are using it. The company’s responses have further troubled some federal lawmakers.

“Not only do I want to see them address our concerns with the sense of urgency it deserves,” said Representative Jimmy Gomez, a California Democrat who has been investigating Amazon’s facial recognition practices. “But I also want to know if law enforcement is using it in ways that violate civil liberties, and what — if any — protections Amazon has built into the technology to protect the rights of our constituents.”

In a letter last month to Mr. Gomez, Amazon said Rekognition customers must abide by Amazon’s policies, which require them to comply with civil rights and other laws. But the company said that for privacy reasons it did not audit customers, giving it little insight into how its product is being used.

The study published last year reported that Microsoft had a perfect score in identifying the gender of lighter-skinned men in a photo database, but that it misclassified darker-skinned women as men about one in five times. IBM and Face++ had an even higher error rate, each misclassifying the gender of darker-skinned women about one in three times.

Ms. Buolamwini said she had developed her methodology with the idea of harnessing public pressure, and market competition, to push companies to fix biases in their software that could pose serious risks to people.

Ms. Buolamwini, who had done similar tests last year, conducted another round to learn whether industry practices had changed, she said. Credit: Tony Luong for The New York Times

“One of the things we were trying to explore with the paper was how to galvanize action,” Ms. Buolamwini said.

Immediately after the study came out last year, IBM published a blog post, “Mitigating Bias in A.I. Models,” citing Ms. Buolamwini’s study. In the post, Ruchir Puri, chief architect at IBM Watson, said IBM had been working for months to reduce bias in its facial recognition system. The company post included test results showing improvements, particularly in classifying the gender of darker-skinned women. Soon after, IBM released a new system that the company said had a tenfold decrease in error rates.

A few months later, Microsoft published its own post, titled “Microsoft improves facial recognition technology to perform well across all skin tones, genders.” In particular, the company said, it had significantly reduced the error rates for female and darker-skinned faces.

Ms. Buolamwini wanted to learn whether the study had changed overall industry practices. So she and a colleague, Deborah Raji, a college student who did an internship at the M.I.T. Media Lab last summer, conducted a new study.

In it, they retested the facial systems of IBM, Microsoft and Face++. They also tested the facial systems of two companies that were not included in the first study: Amazon and Kairos, a start-up in Florida.

The new study found that IBM, Microsoft and Face++ all improved their accuracy in identifying gender.

By contrast, the study reported, Amazon misclassified the gender of darker-skinned females 31 percent of the time, while Kairos had an error rate of 22.5 percent.

Melissa Doval, the chief executive of Kairos, said the company, inspired by Ms. Buolamwini’s work, released a more accurate algorithm in October.

Ms. Buolamwini said the results of her studies raised fundamental questions for society about whether facial technology should not be used in certain situations, such as job interviews, or in products, like drones or police body cameras.

Some federal lawmakers are voicing similar concerns.

“Technology like Amazon’s Rekognition should be used if and only if it is imbued with American values like the right to privacy and equal protection,” said Senator Edward J. Markey, a Massachusetts Democrat who has been investigating Amazon’s facial recognition practices. “I do not think that standard is currently being met.”

Source: Amazon Is Pushing Facial Technology That a Study Says Could Be Biased

AI and the Automation of Jobs Disproportionately Affect Women, World Economic Forum Warns

Interesting analysis of gender and AI:

Women are disproportionately affected by the automation of jobs and development of artificial intelligence, which could widen the gender gap if more women are not encouraged to enter the fields of science, technology and engineering, the World Economic Forum warned on Monday.

Despite statistics showing that the economic opportunity gap between men and women narrowed slightly in 2018, the report from the World Economic Forum finds there are proportionally fewer women than men joining the workforce, largely due to the growth of automation and artificial intelligence.

According to the findings, the automation of certain jobs has impacted many roles traditionally held by women. Women also continue to be underrepresented in industries that utilize science, technology, engineering and mathematics (STEM) skills. This affects their presence in the booming field of AI. Currently, women make up 22% of AI professionals, a gender gap three times larger than in other industries.

“This year’s analysis also warns about the possible emergence of new gender gaps in advanced technologies, such as the risks associated with emerging gender gaps in Artificial Intelligence-related skills,” the report’s authors write. “In an era when human skills are increasingly important and complementary to technology, the world cannot afford to deprive itself of women’s talent in sectors in which talent is already scarce.”

The World Economic Forum report ranked the United States 51st worldwide for gender equality — above average, but below many other developed countries, as well as less-developed nations like Nicaragua, Rwanda and the Philippines. Women in the U.S. had better economic opportunities than those in Austria, Italy, South Korea and Japan, according to the World Economic Forum.

The U.S. fell two spots from its 2017 ranking. While the gender gap improved slightly in economic opportunity and participation, the gap between men and women regarding access to education and political empowerment reversed, in part due to a decline in gender equality in top government positions.

The World Economic Forum, which is known for its annual conference in Davos, Switzerland, measured the gender gap around the world across four factors – political empowerment, economic opportunity, educational attainment and health and survival – to find that the gap has closed 68%, a slight improvement from 2017, which marked the first year since 2006 that the gender gap widened.

As the gender gap stands now, it will take about 108 years to close completely and 202 years to achieve total parity in the workplace.

Source: AI and the Automation of Jobs Disproportionately Affect Women, World Economic Forum Warns

Why no one really knows how many jobs automation will replace

Even though I have argued that immigration planning needs to factor in the possible impact of AI and automation, this note of caution should also be part of that analysis:

Tech CEOs and politicians alike have issued grave warnings about the capability of automation, including AI, to replace large swaths of our current workforce. But the people who actually study this for a living — economists — have very different ideas about just how large the scale of that automation will be.

For example, researchers at Citibank and the University of Oxford estimated that 57 percent of jobs in OECD countries — an international group of 36 nations including the U.S. — were at high risk of automation within the next few decades. In another well-cited study, researchers at the OECD calculated only 14 percent of jobs to be at high risk of automation within the same timeline. That’s a big range when you consider this means a difference of hundreds of millions of potential lost jobs in the next few decades.

Of course, technology also has the capability to create new jobs — or just change the nature of the work people are doing — rather than eliminate jobs altogether. But sizing the scope of sheer job loss is an important metric, because for every job lost, a member of the workforce will have to find a new one, oftentimes in an entirely different profession.

Even within the scope of the U.S., the estimates for how many jobs could be lost in a single year vary widely. Earlier this year, MIT Technology Review analyzed and plotted dozens of across-the-board predictions from researchers at places like McKinsey Global Institute, Gartner and the International Federation of Robotics. Here, we’ve charted some of the data they compiled, with some of our own analysis from additional reports:

So why do these predictions span such a wide range? Recode asked leading academics and economists in the field and found several challenges in sizing how automation and similar technology will change the workforce:

Just because a technology exists doesn’t mean it’s going to be used

Even as new groundbreaking tech becomes available, there’s no guarantee that it will be implemented right away. For example, while autonomous-vehicle technology could one day eliminate or change the jobs of the estimated five million workers in the U.S. who drive professionally, there’s a long road ahead to getting legal clearance to do that.

“The fact that a job can be automated doesn’t mean it will be,” Glenda Quintini, a senior economist at the OECD, told Recode. “There’s a question of implementing, the cost of labor versus technology, and social desirability.”

Jobs involve a mix of tasks

Take the job of a waiter. A robot may be able to take over some aspects of that job, like taking orders, serving the food or handling payments. But other parts, like dealing with an angry customer, may be less so. Some studies, such as the OECD report, assess the likelihood of automation for each task within an occupation, while the Oxford studies make an overall assessment of each job.

There’s a debate among academics about which methodology makes more sense. The authors of the OECD report say that the granularity in their approach is more accurate, while the Oxford report authors argue that for most occupations, the detailed tasks don’t matter: as long as technology like AI can do the critical portion of the work, the occupation ultimately either can or cannot be automated, a binary “yes” or “no.”
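
The disagreement is easy to make concrete. In the toy example below, a task-level (OECD-style) score is a time-weighted average of per-task automatability, while an occupation-level (Oxford-style) score is a single binary call; every number and the threshold are invented for illustration.

```python
# Illustrative tasks for a waiter: (task, automatable_prob, share_of_time).
tasks = [
    ("take orders",            0.9, 0.3),
    ("serve food",             0.7, 0.3),
    ("handle payments",        0.9, 0.2),
    ("calm an angry customer", 0.1, 0.2),
]

# Task-level view: a time-weighted average of per-task scores.
task_level = sum(p * w for _, p, w in tasks)  # 0.68 for these numbers

# Occupation-level view: binary. The job counts as automatable only if
# technology can do enough of the critical work; the threshold here is
# an invented stand-in, not a figure from either study.
CRITICAL_SHARE = 0.7
occupation_level = sum(w for _, p, w in tasks if p >= 0.5) >= CRITICAL_SHARE

print(f"task-level: {task_level:.2f}, occupation-level: {occupation_level}")
```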

The data isn’t good enough because it only measures what we know

To model the future, researchers have to start with data from the present — which is not always perfect. Economists do their best to take inventory of all the jobs out there and what tasks they involve, but this list admittedly isn’t exhaustive.

“There’s no assurance in the end that we’ve captured every aspect of those jobs, so inevitably we might be overlooking some things,” said Carl Benedikt Frey, an economist at the University of Oxford.

It helps to know just how these experts make the predictions to fully understand the room for human error. In the case of the Oxford study, researchers gathered a list of hundreds of occupations and asked a panel of machine learning experts to make their best judgment as to whether some of those jobs were likely to be computerized. The experts weighed in on only 70 of the roughly 702 total jobs, the ones they were most confident they could assess.

For the rest of the occupations, the researchers used an algorithm that attributed a numerical value to how much each job included tasks that are technology bottlenecks — things like “the ability to come up with unusual or clever ideas” or “persuading others to change their minds or behavior.” But ultimately, even that algorithmic modeling isn’t perfect, because not everybody agrees on just how socially complex any given job is. So while quantitative models can help reduce bias, they don’t eliminate it completely, and that can trickle down into differences in the final results.
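
A minimal sketch of that extrapolation step, with invented data and a generic probabilistic classifier standing in for the study’s actual model: fit on the expert-labeled subset, then emit an automation probability for every remaining occupation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Per-occupation scores for "bottleneck" variables such as creativity
# and social persuasion. All data here is invented; the real study
# derived its variables from occupational databases.
X_labeled = rng.random((70, 2))      # the ~70 expert-labeled occupations
y_labeled = (X_labeled.sum(axis=1) < 1.0).astype(int)  # 1 = automatable (toy labels)
X_rest = rng.random((632, 2))        # the remaining ~632 occupations

clf = LogisticRegression().fit(X_labeled, y_labeled)
p_automation = clf.predict_proba(X_rest)[:, 1]  # probability per occupation
print(p_automation[:5].round(2))
```

The subjective expert labels and the choice of bottleneck variables both flow straight into these probabilities, which is exactly the human-error channel the article describes.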

For all these reasons, some academics prefer not to forecast an exact number of jobs lost in a specific timeframe, but instead focus on the relative percentage of jobs in an economy at risk.

“All of these studies that have tried to put a number on how many jobs are going to be lost in a decade or two decades or five years — they’re trying to do something that is just impossible,” Frey said.

Economist John Maynard Keynes famously said that by 2030, due to rapid advancements in technology, we’d see widespread “technological unemployment” and be working an average of only 15 hours a week. It was a positive vision for a world where mankind would finally have “freedom from pressing economic cares” and live a life of leisure. Those estimates seem wildly overblown now. While Keynes was right that technology has helped increase productivity in entirely new industries, the average workweek in the U.S. hasn’t declined since the 1970s.

Thanks in large part to persistent wage stagnation and rising income inequality in the last few decades, most people still have to work just as many hours as they did before in order to make ends meet.

Keynes’s comments remind us that there’s a bad track record of punditry in this field, and that even the greats can be wrong when it comes to predicting just how much, or how fast, technology will impact the workforce.

Source: Why no one really knows how many jobs automation will replace

Accenture: Is artificial intelligence sexist?

An interesting look at the bias question of AI and some AI and related techniques to reduce bias. While written in terms of gender, the approach (use analytics, bias-hunting algorithms, fairness software tools) could be deployed more widely:

Artificial intelligence (AI) is bringing amazing changes to the workplace, and it’s raising a perplexing question: Are those robots sexist?

While it may sound strange that AI could be gender-biased, there’s evidence that it’s happening when organizations aren’t taking the right steps.

In the age of #MeToo and the drive to achieve gender parity in the workplace, it’s critical to understand how and why this occurs and to continue to take steps to address the imbalance. At Accenture, a global professional services company, we have set a goal to have a gender-balanced work force by 2025. There is no shortage of examples that demonstrate how a diverse mindset leads to better results, from reports of crash test dummies that are modelled only on male bodies, to extensive academic studies on the performance improvements at firms with higher female representation. We know that diversity makes our business stronger and more innovative – and it is quite simply the right thing to do.

To make sure that AI is working to support this goal, it’s imperative to know how thought leaders, programmers and developers can use AI to fix the problem.

The issue matters because Canadian workplaces still suffer from gender inequality. Analysis by the Canadian Press earlier this year found that none of Canada’s TSX 60 companies listed a woman as its chief executive officer, and two-thirds did not include even one female among their top earners in their latest fiscal year.

Add to this the reports about behaviour in the workplace that undermines the principles of diversity and inclusion. Of course, AI isn’t the cause, but it can perpetuate the problem unless we focus on solutions. AI can contribute to biased behaviour because the knowledge that goes into its algorithm-based technology came from humans. AI “learns” to make decisions and solve complex problems, but the roots of its knowledge come from whatever we teach it.

There are lots of examples showing that what we put into AI can lead to bias:

  • A team of researchers at the University of Washington studied the top 100 Google image search results for 45 professions. Women were generally under-represented in the searches, as compared with representation data from the Bureau of Labor Statistics. The images of women were also frequently more risqué than how a female worker would actually show up for some jobs, such as construction. Finally, at the time, 27 per cent of American CEOs were women, but only 11 per cent of the Google image results for “CEO” were women (not including Barbie).
  • In a study by Microsoft’s Ece Kamar and Stanford University’s Himabindu Lakkaraju, the researchers acknowledged that the Google images system relies on training data, which could lead to blind spots. For instance, an AI algorithm could see photos of black dogs and white and brown cats – but when shown a photo of a white dog, it may mistake it for a cat.
  • An AI research scientist named Margaret Mitchell trained computers to have human-like reactions to sequences of images. A machine saw a house burning to bits. It described the view as “an amazing view” and “spectacular” – seeing only the contrast and bright colours, not the destruction. This came after the computer was shown a sequence of solely positive images, reflecting a limited viewpoint.
  • Late last year, media reported on Google Translate converting the names of occupations from Turkish, a gender-neutral language, to English. The translator-bots decided, among other things, that a doctor must be a “he,” while any nurse had to be “she.”

These examples come from biased training data, where one or more groups may be under-represented or not represented at all. It’s a problem that can exacerbate gender bias when AI is used for hiring and human resources. Statistical biases can also exist in areas including forecasting, reporting and selection.

The bias can come from inadequate labelling of the populations within the data; for example, there were too few white dogs represented in the database of the machine looking at dogs and cats. Or it can come from machines leaning too heavily on variables that are highly correlated with certain types of data; for example, weeding out job candidates because their address is a women’s dorm on campus, without realizing this was screening out female applicants.

Gender bias can also come from poor human judgment in what information goes into AI and its algorithms. For example, a job search algorithm may be told by its programmers to concentrate on graduates from certain programs in particular geographic locations, which happen to have few women enrolled.

Ironically, one of the best ways to fix AI gender bias involves deploying AI.

The first step is to use analytics to identify gender bias in AI. A Boston-based firm called Palatine Analytics ran an AI-based study looking at performance reviews at five companies. At first the study found that men and women were equally likely to meet their work goals. A deeper, AI-based analysis found that when men were reviewing other men, they gave them higher scores than they gave to women – which was leading to women getting promoted less frequently than men. Traditional analytics looked only at the scores, while the AI-based research helped analyze who was giving out the marks.
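
A rough sketch of that kind of breakdown, using pandas and entirely made-up review scores, shows what the traditional view misses:

```python
import pandas as pd

reviews = pd.DataFrame({
    "reviewer_gender": ["M", "M", "M", "M", "F", "F", "F", "F"],
    "reviewee_gender": ["M", "F", "M", "F", "M", "F", "M", "F"],
    "score":           [4.5, 3.8, 4.6, 3.7, 4.2, 4.3, 4.1, 4.2],
})

# Looking only at the overall average hides who gave each score;
# a pivot by reviewer/reviewee pairing surfaces the pattern
table = reviews.pivot_table(index="reviewer_gender",
                            columns="reviewee_gender",
                            values="score", aggfunc="mean")
print(table)  # a large M->M vs. M->F gap suggests reviewer-side bias
```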

A second method to weed out gender bias is to develop algorithms that can hunt it down. Scientists at Boston University have been working with Microsoft on a concept called word embeddings – sets of data that serve as a kind of computer dictionary used by AI programs. They’ve combed through hundreds of billions of words from public data, keeping legitimate correlations (man is to king as woman is to queen) and altering ones that are biased (man is to computer programmer as woman is to homemaker), to create an unbiased public data set.
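
As a toy illustration of the idea, not the researchers’ actual method, one can estimate a “gender direction” from a definitional pair of words and project it out of other word vectors; the tiny three-dimensional “embeddings” here are invented for the example (real ones have hundreds of dimensions):

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

# Hypothetical toy embeddings
E = {
    "man":        np.array([ 1.0, 0.2, 0.1]),
    "woman":      np.array([-1.0, 0.2, 0.1]),
    "programmer": np.array([ 0.6, 0.9, 0.0]),
    "homemaker":  np.array([-0.6, 0.9, 0.0]),
}

# Estimate a gender direction from a definitional pair
g = normalize(E["man"] - E["woman"])

def debias(v):
    # Remove the component of v that lies along the gender direction
    return v - np.dot(v, g) * g

for word in ("programmer", "homemaker"):
    before = np.dot(normalize(E[word]), g)
    after = np.dot(normalize(debias(E[word])), g)
    print(f"{word}: gender component {before:+.2f} -> {after:+.2f}")
```

After the projection, occupation words carry no component along the gender direction, while legitimately gendered pairs (king/queen) can be left untouched.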

The third step is to design software that can root out bias in AI decision-making. Accenture has created an AI Fairness Tool, which looks for patterns in the data that feed its machines, then tests and retests the algorithms to eliminate bias, including subtle forms that humans might not easily spot, to help ensure people are being assessed fairly. For example, one startup called Knockri uses video analytics and AI to screen job candidates; another, Textio, has a database of some 240 million job posts, to which it applies AI to weed out biased terms.
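
Accenture’s tool itself is proprietary, but one standard test such software could run is a demographic-parity check like the “four-fifths rule” from U.S. employment guidelines; here is a minimal, self-contained sketch with hypothetical screening outcomes:

```python
def selection_rates(decisions, groups):
    """Share of positive decisions per group."""
    rates = {}
    for g in set(groups):
        members = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(members) / len(members)
    return rates

def disparate_impact(rates):
    """Ratio of the lowest to the highest selection rate (1.0 = parity)."""
    return min(rates.values()) / max(rates.values())

# Hypothetical screening outcomes: 1 = advanced to interview
decisions = [1, 1, 0, 1, 1, 0, 0, 1, 0, 0]
groups =    ["M", "M", "M", "M", "M", "F", "F", "F", "F", "F"]

rates = selection_rates(decisions, groups)
print(rates)                   # {'M': 0.8, 'F': 0.2}
print(disparate_impact(rates)) # 0.25 -- far below the 0.8 rule of thumb
```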

AI and gender bias may seem like a problem, but it comes with its own solution. It’s our future – developing and deploying the technology properly can take us from #MeToo to a better hashtag: #GettingToEqual.

What happens when artificial intelligence comes to Ottawa

More on the need for caution in government adoption of AI for decision-making (Ottawa’s use of AI in immigration system has profound implications for human rights):

There is a notion that the choices a computer algorithm makes on our behalf are neutral and somehow more reliable than our notoriously faulty human decision-making.

But, as a new report presented on Parliament Hill Wednesday points out, artificial intelligence isn’t pristine, absolute wisdom downloaded from the clouds. Rather, it’s shaped by the ideas and priorities of the human beings who build it and by the database of examples those architects feed into the machine’s “brain” to help it “learn” and build rules on which to operate.

Much like a child is a product of her family environment—what her parents teach her, what they read to her and show her of the world—artificial intelligence sees the world through the lens we provide for it. This new report, entitled “Bots at the Gate,” contemplates how decisions rendered by artificial intelligence (AI) in Canada’s immigration and refugee systems could impact the human rights, safety and privacy of people who are by definition among the most vulnerable and least able to advocate for themselves.

The report says the federal government has been “experimenting” with AI in limited immigration and refugee applications since at least 2014, including with “predictive analytics” meant to automate certain activities normally conducted by immigration officials. “The nuanced and complex nature of many refugee and immigration claims may be lost on these technologies, leading to serious breaches of internationally and domestically protected human rights, in the form of bias, discrimination, privacy breaches, due process and procedural fairness issues, among others,” the document warns. “These systems will have life-and-death ramifications for ordinary people, many of whom are fleeing for their lives.”

Citing ample evidence of how biased and confused—how human—artificial intelligence can be, the report from the University of Toronto’s International Human Rights Program (IHRP) and the Citizen Lab at the Munk School of Global Affairs and Public Policy makes the case for a very deliberate sort of caution.

The authors mention how a search engine coughs up ads for criminal record checks when presented with a name it associates with a black identity. A woman searching for jobs sees lower-paying opportunities than a man doing the same search. Image recognition software matches a photo of a woman with another of a kitchen. An app store suggests a sex offender search as related to a dating app for gay men.

“You have this huge dataset, you just feed it into the algorithm and trust it to pick out the patterns,” says Cynthia Khoo, a research fellow at the Citizen Lab and a lawyer specializing in technology. “If that dataset is based on a pre-existing set of human decisions, and human decisions are also faulty and biased—if humans have been traditionally racist, for example, or biased in other ways—then that pattern will simply get embedded into the algorithm and it will say, ‘This is the pattern. This is what they want, so I’m going to keep replicating that.’”
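
A minimal sketch of the dynamic Khoo describes, with invented data and scikit-learn standing in for whatever system an agency might actually deploy: a model trained on historically biased decisions learns the old double standard and applies it to new cases.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
n = 2000
group = rng.integers(0, 2, n)          # 0 = majority, 1 = minority applicant
merit = rng.random(n)                  # actual strength of the application

# Historical human decisions: same merit, but a harsher bar for group 1
approved = merit > np.where(group == 1, 0.7, 0.4)

model = DecisionTreeClassifier(max_depth=3).fit(
    np.column_stack([group, merit]), approved)

# Two new applicants with identical merit, differing only in group
new = np.array([[0, 0.55], [1, 0.55]])
print(model.predict(new))  # typically [ True False ]: the old bar is learned
```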

Immigration, Refugees and Citizenship Canada says the department launched two pilot projects in 2018 using computer analytics to identify straightforward and routine Temporary Resident Visa applications from China and India for faster processing. “The use of computer analytics is not intended to replace people,” the department said. “It is another tool to support officers and others in managing our ever-increasing volume of applications. Officers will always remain central to IRCC’s processing.”

This week, the report’s authors made the rounds on the Hill, presenting their findings and concerns to policy-makers. “It does now sound like it’s a measured approach,” says Petra Molnar, a lawyer and technology and human rights researcher with the IHRP. “Which is great.”

Other countries offer cautionary tales rather than best practices. “The algorithm that was used [to determine] whether or not someone was detained at the U.S.-Mexico border was actually set to detain everyone and used as a corroboration for the extension of the detention practices of the Trump administration,” says Molnar.

And in 2016, the U.K. government revoked the visas of 36,000 foreign students after automated voice analysis of their English language equivalency exams suggested they may have cheated and sent someone else to the exam in their place. When the automated voice analysis was compared to human analysis, however, it was found to be wrong over 20 per cent of the time—meaning the U.K. may have ejected some 7,000 foreign students (a fifth of 36,000 is 7,200) who had done nothing wrong.

The European Union’s General Data Protection Regulation, which came into effect in May 2018, is, on the other hand, the gold standard, enshrining such concepts as “the right to an explanation,” or the legal certainty that if your data was processed by an automated tool, you have the right to know how it was done.

Immigration and refugee decisions are both opaque and highly discretionary even when rendered by human beings, argues Molnar, pointing out that two different immigration officers may look at the same file and reach different decisions. The report argues that lack of transparency reaches a different level when you introduce AI into the equation, outlining three distinct reasons.

First, automated decision-making solutions are often created by outside entities that sell them to government agencies, so the source code, training data and other information would be proprietary and hidden from public view.

Second, full disclosure of the guts of these programs might be a bad idea anyway because it could allow people to “game” the system.

“Third, as these systems become more sophisticated (and as they begin to learn, iterate, and improve upon themselves in unpredictable or otherwise unintelligible ways), their logic often becomes less intuitive to human onlookers,” the authors explain. “In these cases, even when all aspects of a system are reviewable and superficially ‘transparent,’ the precise rationale for a given output may remain uninterpretable and unexplainable.” Many of these systems end up inscrutable black boxes that could spit out determinations on the futures of vulnerable people, the report argues.

Her group aims to use a “carrot-and-stick approach,” Khoo says, urging the federal government to make Canada a world leader on this in both a human rights and high-tech context. It’s a message that may find a receptive audience with a government that has been eager to make both halves of that equation central to its brand at home and abroad.

But they’ll have to move fast: If AI is currently in a nascent state in policy decisions that shape real people’s lives, it’s growing fast and won’t stay there for long.

“This is happening everywhere,” Khoo says.

Source: What happens when artificial intelligence comes to Ottawa