A.I. Systems Echo Biases They’re Fed, Putting Scientists on Guard

Yet another article emphasizing the risks:

Last fall, Google unveiled a breakthrough artificial intelligence technology called BERT that changed the way scientists build systems that learn how people write and talk.

But BERT, which is now being deployed in services like Google’s internet search engine, has a problem: It could be picking up on biases in the way a child mimics the bad behavior of his parents.

BERT is one of a number of A.I. systems that learn from lots and lots of digitized information, as varied as old books, Wikipedia entries and news articles. Decades and even centuries of biases — along with a few new ones — are probably baked into all that material.

BERT and its peers are more likely to associate men with computer programming, for example, and generally don’t give women enough credit. One program decided almost everything written about President Trump was negative, even if the actual content was flattering.

AI system for granting UK visas is biased, rights groups claim

Always a challenge with AI, ensuring that the algorithms do not replicate or create bias:

Immigrant rights campaigners have begun a ground-breaking legal case to establish how a Home Office algorithm that filters UK visa applications actually works.

The challenge is the first court bid to expose how an artificial intelligence program affects immigration policy decisions over who is allowed to enter the country.

Foxglove, a new advocacy group promoting justice in the new technology sector, is supporting the case brought by the Joint Council for the Welfare of Immigrants (JCWI) to legally force the Home Office to explain on what basis the algorithm “streams” visa applicants.

The two groups said they feared the AI “streaming tool” created three channels for applicants, including a “fast lane” that would lead to “speedy boarding for white people”.

The Home Office has insisted that the algorithm is used only to allocate applications and does not ultimately rule on them. The final decision remains in the hands of human caseworkers and not machines, it said.

A spokesperson for the Home Office said: “We have always used processes that enable UK Visas and Immigration to allocate cases in an efficient way.

“The streaming tool is only used to allocate applications, not to decide them. It uses data to indicate whether an application might require more or less scrutiny and it complies fully with the relevant legislation under the Equalities Act 2010.”

Cori Crider, a director at Foxglove, rejected the Home Office’s defence of the AI system.

Source: AI system for granting UK visas is biased, rights groups claim

Beware of Automated Hiring: It won’t end employment discrimination. In fact, it could make it worse.

Some interesting ideas to reduce the risks of bias and discrimination:

Algorithms make many important decisions for us, like our creditworthiness, best romantic prospects and whether we are qualified for a job. Employers are increasingly turning to automated hiring platforms, believing they’re both more convenient and less biased than humans. However, as I describe in a new paper, this is misguided.

In the past, a job applicant could walk into a clothing store, fill out an application, and even hand it straight to the hiring manager. Nowadays, her application must make it through an obstacle course of online hiring algorithms before it might be considered. This is especially true for low-wage and hourly workers.

The situation applies to white-collar jobs too. People applying to be summer interns and first-year analysts at Goldman Sachs have their résumés digitally scanned for keywords that can predict success at the company. And the company has now embraced automated interviewing.

Automated hiring can create a closed loop system. Advertisements created by algorithms encourage certain people to send in their résumés. After the résumés have undergone automated culling, a lucky few are hired and then subjected to automated evaluation, the results of which are looped back to establish criteria for future job advertisements and selections. This system operates with no transparency or accountability built in to check that the criteria are fair to all job applicants.

The language gives it away: How an algorithm can help us detect fake news

Interesting:

Have you ever read something online and shared it among your networks, only to find out it was false?

As a software engineer and computational linguist who spends most of her work and even leisure hours in front of a computer screen, I am concerned about what I read online. In the age of social media, many of us consume unreliable news sources. We’re exposed to a wild flow of information in our social networks — especially if we spend a lot of time scanning our friends’ random posts on Twitter and Facebook.

My colleagues and I at the Discourse Processing Lab at Simon Fraser University have conducted research on the linguistic characteristics of fake news.

The effects of fake news

A study in the United Kingdom found that about two-thirds of the adults surveyed regularly read news on Facebook, and that half of those had the experience of initially believing a fake news story. Another study, conducted by researchers at the Massachusetts Institute of Technology, focused on the cognitive aspects of exposure to fake news and found that, on average, newsreaders believe a false news headline at least 20 percent of the time.

False stories are now spreading 10 times faster than real news and the problem of fake news seriously threatens our society.

For example, during the 2016 election in the United States, an astounding number of U.S. citizens believed and shared a patently false conspiracy claiming that Hillary Clinton was connected to a human trafficking ring run out of a pizza restaurant. The owner of the restaurant received death threats, and one believer showed up at the restaurant with a gun. This — and a number of other fake news stories distributed during the election season — had an undeniable impact on people’s votes.

It’s often difficult to find the origin of a story after partisan groups, social media bots and friends of friends have shared it thousands of times. Fact-checking websites such as Snopes and Buzzfeed can only address a small portion of the most popular rumors.

The technology behind the internet and social media has enabled this spread of misinformation; maybe it’s time to ask what this technology has to offer in addressing the problem.

Giveaways in writing style

Recent advances in machine learning have made it possible for computers to instantaneously complete tasks that would have taken humans much longer. For example, there are computer programs that help police identify the faces of criminal suspects in a matter of seconds. This kind of artificial intelligence trains algorithms to classify, detect and make decisions.

When machine learning is applied to natural language processing, it is possible to build text classification systems that recognize one type of text from another.

During the past few years, natural language processing scientists have become more active in building algorithms to detect misinformation; this helps us to understand the characteristics of fake news and develop technology to help readers.

One approach finds relevant sources of information, assigns each source a credibility score and then integrates them to confirm or debunk a given claim. This approach is heavily dependent on tracking down the original source of news and scoring its credibility based on a variety of factors.
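As a rough sketch of this first approach, one could aggregate per-source verdicts on a claim, weighting each vote by the source's credibility score. Everything below — the scoring scale, the sample votes — is invented for illustration, not taken from any real system:

```python
# Hypothetical sketch: combine per-source verdicts on a claim,
# weighting each vote by that source's credibility score (0.0-1.0).
# The scores and votes are invented for illustration.

def assess_claim(verdicts):
    """verdicts: list of (credibility, supports_claim) pairs.
    Returns a score in [-1, 1]; positive leans 'confirmed',
    negative leans 'debunked', 0.0 when there is no evidence."""
    total = sum(cred for cred, _ in verdicts)
    if total == 0:
        return 0.0
    signed = sum(cred if supports else -cred for cred, supports in verdicts)
    return signed / total

# Two credible sources support the claim, one weak source disputes it.
votes = [(0.9, True), (0.8, True), (0.3, False)]
print(assess_claim(votes))  # 0.7 -> leans confirmed
```

A real system would of course face the hard part the paragraph mentions: tracking down the original source and scoring its credibility in the first place.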

A second approach examines the writing style of a news article rather than its origin. The linguistic characteristics of a written piece can tell us a lot about the authors and their motives. For example, specific words and phrases tend to occur more frequently in a deceptive text compared to one written honestly.

Spotting fake news

Our research identifies linguistic characteristics to detect fake news using machine learning and natural language processing technology. Our analysis of a large collection of fact-checked news articles on a variety of topics shows that, on average, fake news articles use more expressions that are common in hate speech, as well as words related to sex, death and anxiety. Genuine news, on the other hand, contains a larger proportion of words related to work (business) and money (economy).

This suggests that a stylistic approach combined with machine learning might be useful in detecting suspicious news.
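As an illustration of the stylistic idea, one can compare how often words from each vocabulary category appear in a text. The word lists below are tiny, invented stand-ins for the much larger lexicons a real study would use:

```python
# Illustrative sketch only: score a text by comparing the frequency of
# words from a "fake-leaning" lexicon (death/anxiety terms) against a
# "genuine-leaning" lexicon (work/money terms). These lists are
# invented stand-ins for real research lexicons.
import re

FAKE_LEANING = {"death", "dead", "fear", "panic", "shocking", "terror"}
GENUINE_LEANING = {"business", "economy", "market", "company", "work", "trade"}

def style_score(text):
    """Positive score -> more fake-leaning vocabulary; negative -> more
    genuine-leaning. Zero when neither lexicon matches."""
    words = re.findall(r"[a-z']+", text.lower())
    fake = sum(w in FAKE_LEANING for w in words)
    genuine = sum(w in GENUINE_LEANING for w in words)
    if fake + genuine == 0:
        return 0.0
    return (fake - genuine) / (fake + genuine)

print(style_score("Shocking death toll sparks panic and fear"))  # 1.0
print(style_score("The company reported steady market growth"))  # -1.0
```

In practice such hand-built counts would be just a few of many features fed into a trained classifier rather than a score used on their own.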

Our fake news detector is built based on linguistic characteristics extracted from a large body of news articles. It takes a piece of text and shows how similar it is to the fake news and real news items that it has seen before. (Try it out!)

The main challenge, however, is to build a system that can handle the vast variety of news topics and the quick turnover of headlines online. Computer algorithms learn from samples, and if those samples are not sufficiently representative of online news, the model’s predictions will not be reliable.

One option is to have human experts collect and label a large quantity of fake and real news articles. This data enables a machine-learning algorithm to find common features that keep occurring in each collection regardless of other varieties. Ultimately, the algorithm will be able to distinguish with confidence between previously unseen real or fake news articles.
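The label-and-learn step described above can be sketched with a toy Naive Bayes word-count model. The four "articles" below are invented; as the paragraph notes, a real detector would need a large, representative labeled dataset:

```python
# Toy Naive Bayes sketch of learning from labeled examples.
# The tiny "corpus" is invented for illustration.
import math
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z']+", text.lower())

def train(labeled_texts):
    """labeled_texts: list of (text, label) pairs. Returns per-label
    word counts and the shared vocabulary."""
    counts = {}
    vocab = set()
    for text, label in labeled_texts:
        words = tokenize(text)
        counts.setdefault(label, Counter()).update(words)
        vocab.update(words)
    return counts, vocab

def predict(counts, vocab, text):
    """Pick the label maximizing the Laplace-smoothed log-likelihood."""
    scores = {}
    for label, wc in counts.items():
        total = sum(wc.values())
        score = 0.0
        for w in tokenize(text):
            score += math.log((wc[w] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

corpus = [
    ("shocking secret plot exposed", "fake"),
    ("you won't believe this miracle cure", "fake"),
    ("city council approves budget for schools", "real"),
    ("quarterly economic report shows modest growth", "real"),
]
counts, vocab = train(corpus)
print(predict(counts, vocab, "miracle cure exposed"))  # fake
```

With enough labeled articles, the same mechanism picks up exactly the recurring features the paragraph describes, regardless of topic.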

Source: The language gives it away: How an algorithm can help us detect fake news

Will Your Job Still Exist In 2030?

More on the expected impact of automation and AI:

Automation is already here. Robots helped build your car and pack your latest online shopping order. A chatbot might help you figure out your credit card balance. A computer program might scan and process your résumé when you apply for work.

What will work in America look like a decade from now? A team of economists at the McKinsey Global Institute set out to figure it out in a new report out Thursday.

The research finds automation widening the gap between urban and rural areas and dramatically affecting people who didn’t go to college or didn’t finish high school. It also projects some occupations poised for massive growth or growing enough to offset displaced jobs.

Below are some of the key takeaways from McKinsey’s forecast.

Most jobs will change; some will decline

“Intelligent machines are going to become more prevalent in every business. All of our jobs are going to change,” said Susan Lund, co-author of the report. Almost 40% of U.S. jobs are in occupations that are likely to shrink — though not necessarily disappear — by 2030, the researchers found.

Employing almost 21 million Americans, office support is by far the most common U.S. occupation that’s most at risk of losing jobs to digital services, according to McKinsey. Food service is another heavily affected category, as hotel, fast-food and other kitchens automate the work of cooks, dishwashers and others.

At the same time, “the economy is adding jobs that make use of new technologies,” McKinsey economists wrote. Those jobs include software developers and information security specialists — who are constantly in short supply — but also solar panel installers and wind turbine technicians.

Health care jobs, including hearing aid specialists and home health aides, will stay in high demand for the next decade, as baby boomers age. McKinsey also forecast growth for jobs that tap into human creativity or “socioemotional skills” or provide personal service for the wealthy, like interior designers, psychologists, massage therapists, dietitians and landscape architects.

In some occupations, even as jobs disappear, new ones might offset the losses. For example, digital assistants might replace counter attendants and clerks who help with rentals, but more workers might be needed to help shoppers in stores or staff distribution centers, McKinsey economists wrote.

Similarly, enough new jobs will be created in transportation or customer service and sales to offset ones lost by 2030.

Employers and communities could do more to match workers in waning fields to other compatible jobs with less risk of automation. For instance, 900,000 bookkeepers, accountants and auditing clerks nationwide might see their jobs phased out but could be retrained to become loan officers, claims adjusters or insurance underwriters, the McKinsey report said.

Automation is likely to continue widening the gap between job growth in urban and rural areas

By 2030, the majority of job growth may be concentrated in just 25 megacities and their peripheries, while large swaths of the country see slower job creation and even lose jobs, the researchers found. This gap has already widened in the past decade, as Federal Reserve Chairman Jerome Powell noted in his remarks on Wednesday.

Source: Will Your Job Still Exist In 2030?

Facial Expression Analysis Can Help Overcome Racial Bias In The Assessment Of Advertising Effectiveness

Interesting. The advertisers are always ahead of the rest of us….:

There has been significant coverage of bias problems in the use of machine learning in the analysis of people. There has also been pushback against the use of facial recognition because of both bias and inaccuracy. However, a narrower approach, one focused on recognizing emotions rather than identifying individuals, can address marketing challenges. Sentiment analysis by survey is one thing, but tracking human facial responses can significantly improve the accuracy of the analysis.

The Brookings Institution points to a projection that the US will become a majority-minority nation by 2045. That means that the non-white population will be over 50% of the population. Even before then, the growing demographic shift means that the non-white population has become a significant part of the consumer market. In this multicultural society, it’s important to know if messages work across those cultures. Today’s marketing needs are much more detailed and subtle than the famous example of the Chevy Nova not selling in Latin America because “no va” means “no go” in Spanish.

It’s also important to understand not only the growth of the multicultural markets, but also what they mean in pure dollars. The following chart from the Collage Group shows that the 2017 revenues from the three largest minority segments are similar to the revenues of entire nations.

It would be foolish for companies to ignore these already large and continually growing segments. While there’s the obvious need to be more inclusive in the images, in particular the people, appearing in ads, the picture is only part of the equation. The right words must also be used to interest different demographics. Of course, that a marketing team thinks it has been more inclusive doesn’t make it so. Just as with other aspects of marketing, these messages must be tested.

Companies have begun to look at vision AI for more than the much-reported-on task of facial recognition, that is, identifying people. While social media and surveys can catch some sentiment, analysis of facial features is even more detailed. It is also an easier AI problem than full facial identification. Identifying basic facial features such as the mouth and the eyes, then tracking changes as a viewer watches or reads an advertisement, can catch not only a smile but the “strength” of that smile. Other types of sentiment capture can be scaled in the same way.

Then, without having to identify the individual people, information about their demographics can build a picture of how sentiment varies between groups. For instance, the same ad can easily get a different typical reaction from white, middle-aged women than from older black men, and a different one again from East Asian teenagers. With social media polarizing and fragmenting many attitudes, it’s important to understand how marketing messages are received across the target audiences.
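The “strength of a smile” idea can be sketched with simple geometry. Everything below is invented for illustration — the landmark points would come from a real face-tracking library, and the heuristic (corner lift relative to mouth width) is just one plausible choice:

```python
# Illustrative geometry only: estimate "smile strength" from mouth
# landmarks. Coordinates would come from a face-tracking library;
# the points and scale below are invented for illustration.
# Image convention: y grows downward.

def smile_strength(left_corner, right_corner, upper_lip, lower_lip):
    """Each argument is an (x, y) pixel coordinate. Returns a ratio:
    higher when the mouth corners sit above the lip midline (a smile
    lifts the corners), zero for a neutral or downturned mouth."""
    midline_y = (upper_lip[1] + lower_lip[1]) / 2
    # Corner lift: positive when corners are above the lip midline.
    lift = midline_y - (left_corner[1] + right_corner[1]) / 2
    width = abs(right_corner[0] - left_corner[0])
    if width == 0:
        return 0.0
    return max(0.0, lift / width)

# Neutral mouth: corners level with the lip midline.
print(smile_strength((40, 100), (80, 100), (60, 95), (60, 105)))  # 0.0
# Smiling mouth: corners lifted 8px on a 40px-wide mouth.
print(smile_strength((40, 92), (80, 92), (60, 92), (60, 108)))    # 0.2
```

Normalizing the lift by mouth width is what makes the score comparable across faces at different distances from the camera.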

The use of AI to rapidly provide feedback on sentiment analysis will help advertisers better tune messages, whether aiming at a general message that attracts an audience across the US marketing landscape or finding appropriately focused messages to attract specific demographics. One example of marketers leveraging AI in this arena is Collage Group, a market research firm that has helped companies better understand and improve messaging to minority communities. Collage Group recently rolled out AdRate, a process for evaluating ads that integrates AI vision to analyze viewer sentiment.

“Companies have come to understand the growing multicultural nature of the US consumer market,” said David Wellisch, CEO, Collage Group. “Artificial intelligence is improving Collage Group’s ability to help B2C companies understand the different reactions in varied communities and then adapt their messaging to the best effect.”

While questions of accuracy and ethics in the use of facial recognition will continue in many areas of business, the opportunity to craft messages better suited to the market’s diversity is a clear benefit. Visual AI that enhances the accuracy of sentiment analysis is clearly a segment that will grow.

Source: Facial Expression Analysis Can Help Overcome Racial Bias In The Assessment Of Advertising Effectiveness

San Francisco Is Right: Facial Recognition Must Be Put On Hold

Good analysis by Manjoo:

What are we going to do about all the cameras? The question keeps me up at night, in something like terror.

Cameras are the defining technological advance of our age. They are the keys to our smartphones, the eyes of tomorrow’s autonomous drones and the FOMO engines that drive Facebook, Instagram, TikTok, Snapchat and Pornhub. Cheap, ubiquitous, viral photography has fed social movements like Black Lives Matter, but cameras are already prompting more problems than we know what to do with — revenge porn, live-streamed terrorism, YouTube reactionaries and other photographic ills.

And cameras aren’t done. They keep getting cheaper and — in ways both amazing and alarming — they are getting smarter. Advances in computer vision are giving machines the ability to distinguish and track faces, to make guesses about people’s behaviors and intentions, and to comprehend and navigate threats in the physical environment. In China, smart cameras sit at the foundation of an all-encompassing surveillance totalitarianism unprecedented in human history. In the West, intelligent cameras are now being sold as cheap solutions to nearly every private and public woe, from catching cheating spouses and package thieves to preventing school shootings and immigration violations. I suspect these and more uses will take off, because in my years of covering tech, I’ve gleaned one ironclad axiom about society: If you put a camera in it, it will sell.

That’s why I worry that we’re stumbling dumbly into a surveillance state. And it’s why I think the only reasonable thing to do about smart cameras now is to put a stop to them.

This week, San Francisco’s board of supervisors voted to ban the use of facial-recognition technology by the city’s police and other agencies. Oakland and Berkeley are also considering bans, as is the city of Somerville, Mass. I’m hoping for a cascade. States, cities and the federal government should impose an immediate moratorium on facial recognition, especially its use by law-enforcement agencies. We might still decide, at a later time, to give ourselves over to cameras everywhere. But let’s not jump into an all-seeing future without understanding the risks at hand.

What are the risks? Two new reports by Clare Garvie, a researcher who studies facial recognition at Georgetown Law, brought the dangers home for me. In one report — written with Laura Moy, executive director of Georgetown Law’s Center on Privacy & Technology — Ms. Garvie uncovered municipal contracts indicating that law enforcement agencies in Chicago, Detroit and several other cities are moving quickly, and with little public notice, to install Chinese-style “real time” facial recognition systems.

In Detroit, the researchers discovered that the city signed a $1 million deal with DataWorks Plus, a facial recognition vendor, for software that allows for continuous screening of hundreds of private and public cameras set up around the city — in gas stations, fast-food restaurants, churches, hotels, clinics, addiction treatment centers, affordable-housing apartments and schools. Faces caught by the cameras can be searched against Michigan’s driver’s license photo database. Researchers also obtained the Detroit Police Department’s rules governing how officers can use the system. The rules are broad, allowing police to scan faces “on live or recorded video” for a wide variety of reasons, including to “investigate and/or corroborate tips and leads.” In a letter to Ms. Garvie, James E. Craig, Detroit’s police chief, disputed any “Orwellian activities,” adding that he took “great umbrage” at the suggestion that the police would “violate the rights of law-abiding citizens.”

I’m less optimistic, and so is Ms. Garvie. “Face recognition gives law enforcement a unique ability that they’ve never had before,” Ms. Garvie told me. “That’s the ability to conduct biometric surveillance — the ability to see not just what is happening on the ground but who is doing it. This has never been possible before. We’ve never been able to take mass fingerprint scans of a group of people in secret. We’ve never been able to do that with DNA. Now we can with face scans.”

That ability alters how we should think about privacy in public spaces. It has chilling implications for speech and assembly protected by the First Amendment; it means that the police can watch who participates in protests against the police and keep tabs on them afterward.

In fact, this is already happening. In 2015, when protests erupted in Baltimore over the death of Freddie Gray while in police custody, the Baltimore County Police Department used facial recognition software to find people in the crowd who had outstanding warrants — arresting them immediately, in the name of public safety.

Eyes On Detroit

Detroit’s facial recognition operation taps into high-definition cameras set up around the city under a program called Project Green Light Detroit. Participating businesses send the Detroit Police Department a live feed from their indoor and outdoor cameras. In exchange, they receive “special police attention,” according to the initiative’s website.

Source: Detroit Police Department; Open Street Map | By The New York Times

But there’s another wrinkle in the debate over facial recognition. In a second report, Ms. Garvie found that for all their alleged power, face-scanning systems are being used by the police in a rushed, sloppy way that should call into question their results.

Here’s one of the many crazy stories in Ms. Garvie’s report: In the spring of 2017, a man was caught on a security camera stealing beer from a CVS store in New York. But the camera didn’t get a good shot of the man, and the city’s face-scanning system returned no match.

The police, however, were undeterred. A detective in the New York Police Department’s facial recognition department thought the man in the pixelated CVS video looked like the actor Woody Harrelson. So the detective went to Google Images, got a picture of the actor and ran his face through the face scanner. That produced a match, and the law made its move. A man was arrested for the crime not because he looked like the guy caught on tape but because Woody Harrelson did.

The robot revolution will be worse for men

Interesting long read and analysis:

Demographics will determine who gets hit worst by automation. Policy will help curb the damage.

The robots will someday take our jobs. But not all our jobs, and we don’t really know how many. Nor do we understand which jobs will be eliminated and which will be transitioned into what some say will be better, less tedious work.

What we do know is that automation and artificial intelligence will affect Americans unevenly, according to data from McKinsey and the 2016 US Census that was analyzed by the Brookings think tank.

Young people — especially those in rural areas or who are underrepresented minorities — will have a greater likelihood of having their jobs replaced by automation. Meanwhile, older, more educated white people living in big cities are more likely to maintain their coveted positions, either because their jobs are irreplaceable or because they’re needed in new jobs alongside our robot overlords.

The Brookings study also warns that automation will exacerbate existing social inequalities along certain geographic and demographic lines, because it will likely eliminate many lower- and middle-skill jobs considered stepping stones to more advanced careers. These job losses will be concentrated in rural areas, particularly the swath of America between the coasts.

However, at least in the case of gender, it’s the men, for once, who will be getting the short end of the stick. Jobs traditionally held by men have a higher “average automation potential” than those held by women, meaning that a greater share of those tasks could be automated with current technology, according to Brookings. That’s because the occupations men are more likely to hold tend to be more manual and more easily replaced by machines and artificial intelligence.

Of course, the real point here is that people of all stripes face employment disruption as new technologies are able to do many of our tasks faster, more efficiently, and more precisely than mere mortals. The implications of this unemployment upheaval are far-reaching and raise many questions: How will people transition to the jobs of the future? What will those jobs be? Is it possible to mitigate the polarizing effects automation will have on our already-stratified society of haves and have-nots?

A recent McKinsey report estimated that by 2030, up to one-third of work activities could be displaced by automation, meaning a large portion of the populace will have to make changes in how they work and support themselves.

“This anger we see among many people across our country feeling like they’re being left behind from the American dream, this report highlights that many of these same people are in the crosshairs of the impact of automation,” said Alastair Fitzpayne, executive director of the Future of Work Initiative at the Aspen Institute.

“Without policy intervention, the problems we see in our economy in terms of wage stagnation, labor participation, alarming levels of growth in low-wage jobs — those problems are likely to get worse, not better,” Fitzpayne told Recode. “Tech has a history that isn’t only negative if you look over the last 150 years. It can improve economic growth, it can create new jobs, it can boost people’s incomes, but you have to make sure the mechanisms are in place for that growth to be inclusive.”

Before we look at potential solutions, here are six charts that break down which groups are going to be affected most by the oncoming automation — and which have a better chance of surviving the robot apocalypse:

Occupation

The type of job you have largely affects your likelihood of being replaced by a machine. Jobs that require precision and repetition — food prep and manufacturing, for example — can be automated much more easily. Jobs that require creativity and critical thinking, like analysts and teachers, can’t as easily be recreated by machines. You can drill down further into which jobs fall under each job type here.

Education

People’s level of education greatly affects the types of work they are eligible for, so education and occupation are closely linked. Less education will more likely land a person in a more automatable job, while more education means more job options.

Age

Younger people are less likely to have attained higher degrees than older people; they’re also more likely to be in entry-level jobs that don’t require as much variation or decision-making as they might have later in life. Therefore, young people are more likely to be employed in occupations that are at risk of automation.

Race

The robot revolution will also increase racial inequality, as underrepresented minorities are more likely to hold jobs with tasks that could be automated — like food service, office administration, and agriculture.

Gender

Men, who have always been more likely to have better jobs and pay than women, also might be the first to have their jobs usurped. That’s because men tend to over-index in production, transportation, and construction jobs — all occupational groups that have tasks with above-average automation exposure. Women, meanwhile, are overrepresented in occupations related to human interaction, like health care and education — jobs that largely require human labor. Women are also now more likely to attain higher education degrees than men, meaning their jobs could be somewhat safer from being usurped by automation.

Location

Heartland states and rural areas — places that have large shares of employment in routine-intensive occupations like those found in the manufacturing, transportation, and extraction industries — contain a disproportionate share of occupations whose tasks are highly automatable. Small metro areas are also highly susceptible to job automation, though places with universities tend to be an exception. Cities — especially ones that are tech-focused and contain a highly educated populace, like New York; San Jose, California; and Chapel Hill, North Carolina — have the lowest automation potential of all.

See how your county could fare on the map below — the darker purple colors represent higher potential for automation:

Note that in none of the charts above are the percentages of tasks that could be automated very small — in most cases, the Brookings study estimates, at least 20 percent of any given demographic will see changes to their tasks due to automation. Of course, none of this means the end of work for any one group, but rather a transition in the way we work that won’t be felt equally.

“The fact that some of the groups struggling most now are among the groups that may face most challenges is a sobering thought,” said Mark Muro, a senior fellow at Brookings’s Metropolitan Policy Program.

In the worst-case scenario, automation will cause unemployment in the US to soar and exacerbate existing social divides. Depending on the estimate, anywhere from 3 million to 80 million people in the US could lose their jobs, so the implications could be dire.

“The Mad Max thing is possible, maybe not here but the impact on developing countries could be a lot worse as there was less stability to begin with,” said Martin Ford, author of Rise of the Robots and Architects of Intelligence. “Ultimately, it depends on the choices we make, what we do, how we can adapt.”

Fortunately, there are a number of potential solutions. The Brookings study and others lay out ways to mitigate job loss, and maybe even make the jobs of the future better and more attainable. The hardest part will be getting the government and private sector to agree on and pay for them. The Brookings policy recommendations include:

  • Create a universal adjustment benefit for laid-off workers. This involves offering career counseling, education, and training in new, relevant skills, and giving displaced workers financial support while they work on getting a new job. But as we know from the first manufacturing revolution, it’s difficult if not impossible to get government and corporations on board with aiding and reeducating displaced low-skilled workers. Indeed, many cities across the Rust Belt have yet to recover from the automation of car and steel plants in the last century. Government universal adjustment programs, which vary in cost based on their size and scope, provide a template but have had their own failings. Some suggest a carbon tax could be a way to create billions of dollars in revenue for universal benefits or even universal basic income. Additionally, taxing income as opposed to labor — which could become scarcer with automation — provides other ways to fund universal benefits.
  • Maintain a full-employment economy. A focus on creating new jobs through subsidized employment programs will help create jobs for all who want them. Being employed will cushion some of the blow associated with transitioning jobs. Progressive Democrats’ proposed Green New Deal, which would create jobs geared toward lessening the United States’ dependence on fossil fuels, could be one way of getting to full employment. Brookings also recommends a federal monetary policy that prioritizes full employment over fighting inflation — a feasible action, but one that would require a meaningful change to the Fed’s longstanding priorities.
  • Introduce portable benefits programs. This would give workers access to traditional employment benefits like health care, regardless of whether or where they’re employed. If people are taken care of in the meantime, some of the stress of transitioning to new jobs would be lessened. These benefits also allow the possibility of part-time jobs or gig work — something that has lately become more of a necessity for many Americans. Currently, half of Americans get their health care through their jobs, and doctors and politicians have historically fought against government-run systems. The concept of portable benefits has recently been popular among freelance unions as well as among contract workers employed in gig economy jobs like Uber.
  • Pay special attention to communities that are hardest-hit. As we know from the charts above, some parts of America will have it way worse than others. But there are already a number of programs in place that provide regional protection for at-risk communities that could be expanded upon to deal with disruption from automation. The Department of Defense already does this on a smaller scale, with programs to help communities adjust after base closures or other program cancellations. Automation aid efforts would provide a variety of support, including grants and project management, as well as funding to convert facilities into new uses. Additionally, “Opportunity Zones” in the tax code — popular among the tech set — give companies tax breaks for investing in low-income areas. These investments in turn create jobs and stimulate spending in areas where it’s most needed.
  • Increased investment in AI, automation, and related technology. This may seem counterintuitive, seeing as automation is causing many of these problems in the first place, but Brookings believes that embracing this new tech — not erecting barriers to the inevitable — will generate the economic productivity needed to increase both standards of living and jobs outside of those that will be automated. “We are not vilifying these technologies; we are calling attention to positive side effects,” Brookings’s Muro said. “These technologies will be integral in boosting American productivity, which is dragging.”

None of these solutions, of course, is a silver bullet, but in conjunction, they could help mitigate some of the pain Americans will face from increased automation — if we act soon. Additionally, many of these ideas currently seem rather progressive, so they could be difficult to implement in a Republican-led government.

“I’m a long-run optimist. I think we will work it out. We have to — we have no choice,” Ford told Recode, emphasizing that humanity also stands to gain huge benefits from using AI and robotics to solve our biggest problems, like climate change and disease.

“The short term, though, could be tough — I worry about our ability to react in that time frame,” Ford said, especially given the current political climate. “But there comes a point when the cost of not adapting exceeds the cost of change.”

Source: The robot revolution will be worse for men

The Hidden Automation Agenda of the Davos Elite

Greater clarity is needed on automation’s likely impact on the labour force, and on what it means for immigration levels:

They’ll never admit it in public, but many of your bosses want machines to replace you as soon as possible.

I know this because, for the past week, I’ve been mingling with corporate executives at the World Economic Forum’s annual meeting in Davos. And I’ve noticed that their answers to questions about automation depend very much on who is listening.

In public, many executives wring their hands over the negative consequences that artificial intelligence and automation could have for workers. They take part in panel discussions about building “human-centered A.I.” for the “Fourth Industrial Revolution” — Davos-speak for the corporate adoption of machine learning and other advanced technology — and talk about the need to provide a safety net for people who lose their jobs as a result of automation.

But in private settings, including meetings with the leaders of the many consulting and technology firms whose pop-up storefronts line the Davos Promenade, these executives tell a different story: They are racing to automate their own work forces to stay ahead of the competition, with little regard for the impact on workers.

All over the world, executives are spending billions of dollars to transform their businesses into lean, digitized, highly automated operations. They crave the fat profit margins automation can deliver, and they see A.I. as a golden ticket to savings, perhaps by letting them whittle departments with thousands of workers down to just a few dozen.

“People are looking to achieve very big numbers,” said Mohit Joshi, the president of Infosys, a technology and consulting firm that helps other businesses automate their operations. “Earlier they had incremental, 5 to 10 percent goals in reducing their work force. Now they’re saying, ‘Why can’t we do it with 1 percent of the people we have?’”

Few American executives will admit wanting to get rid of human workers, a taboo in today’s age of inequality. So they’ve come up with a long list of buzzwords and euphemisms to disguise their intent. Workers aren’t being replaced by machines, they’re being “released” from onerous, repetitive tasks. Companies aren’t laying off workers, they’re “undergoing digital transformation.”

A 2017 survey by Deloitte found that 53 percent of companies had already started to use machines to perform tasks previously done by humans. The figure is expected to climb to 72 percent by next year.

The corporate elite’s A.I. obsession has been lucrative for firms that specialize in “robotic process automation,” or R.P.A. Infosys, which is based in India, reported a 33 percent increase in year-over-year revenue in its digital division. IBM’s “cognitive solutions” unit, which uses A.I. to help businesses increase efficiency, has become the company’s second-largest division, posting $5.5 billion in revenue last quarter. The investment bank UBS projects that the artificial intelligence industry could be worth as much as $180 billion by next year.

Kai-Fu Lee, the author of “AI Superpowers” and a longtime technology executive, predicts that artificial intelligence will eliminate 40 percent of the world’s jobs within 15 years. In an interview, he said that chief executives were under enormous pressure from shareholders and boards to maximize short-term profits, and that the rapid shift toward automation was the inevitable result.

[Photo: The Milwaukee offices of the Taiwanese electronics maker Foxconn, whose chairman has said he plans to replace 80 percent of the company’s workers with robots in five to 10 years. Credit: Lauren Justice for The New York Times]

“They always say it’s more than the stock price,” he said. “But in the end, if you screw up, you get fired.”

Other experts have predicted that A.I. will create more new jobs than it destroys, and that job losses caused by automation will probably not be catastrophic. They point out that some automation helps workers by improving productivity and freeing them to focus on creative tasks over routine ones.

But at a time of political unrest and anti-elite movements on the progressive left and the nationalist right, it’s probably not surprising that all of this automation is happening quietly, out of public view. In Davos this week, several executives declined to say how much money they had saved by automating jobs previously done by humans. And none were willing to say publicly that replacing human workers is their ultimate goal.

“That’s the great dichotomy,” said Ben Pring, the director of the Center for the Future of Work at Cognizant, a technology services firm. “On one hand,” he said, profit-minded executives “absolutely want to automate as much as they can.”

“On the other hand,” he added, “they’re facing a backlash in civic society.”

For an unvarnished view of how some American leaders talk about automation in private, you have to listen to their counterparts in Asia, who often make no attempt to hide their aims. Terry Gou, the chairman of the Taiwanese electronics manufacturer Foxconn, has said the company plans to replace 80 percent of its workers with robots in the next five to 10 years. Richard Liu, the founder of the Chinese e-commerce company JD.com, said at a business conference last year that “I hope my company would be 100 percent automation someday.”

One common argument made by executives is that workers whose jobs are eliminated by automation can be “reskilled” to perform other jobs in an organization. They offer examples like Accenture, which claimed in 2017 to have replaced 17,000 back-office processing jobs without layoffs, by training employees to work elsewhere in the company. In a letter to shareholders last year, Jeff Bezos, Amazon’s chief executive, said that more than 16,000 Amazon warehouse workers had received training in high-demand fields like nursing and aircraft mechanics, with the company covering 95 percent of their expenses.

But these programs may be the exception that proves the rule. There are plenty of stories of successful reskilling — optimists often cite a program in Kentucky that trained a small group of former coal miners to become computer programmers — but there is little evidence that it works at scale. A report by the World Economic Forum this month estimated that of the 1.37 million workers who are projected to be fully displaced by automation in the next decade, only one in four can be profitably reskilled by private-sector programs. The rest, presumably, will need to fend for themselves or rely on government assistance.

In Davos, executives tend to speak about automation as a natural phenomenon over which they have no control, like hurricanes or heat waves. They claim that if they don’t automate jobs as quickly as possible, their competitors will.

“They will be disrupted if they don’t,” said Katy George, a senior partner at the consulting firm McKinsey & Company.

Automating work is a choice, of course, one made harder by the demands of shareholders, but it is still a choice. And even if some degree of unemployment caused by automation is inevitable, these executives can choose how the gains from automation and A.I. are distributed, and whether to give the excess profits they reap as a result to workers, or hoard them for themselves and their shareholders.

The choices made by the Davos elite — and the pressure applied on them to act in workers’ interests rather than their own — will determine whether A.I. is used as a tool for increasing productivity or for inflicting pain.

“The choice isn’t between automation and non-automation,” said Erik Brynjolfsson, the director of M.I.T.’s Initiative on the Digital Economy. “It’s between whether you use the technology in a way that creates shared prosperity, or more concentration of wealth.”

Amazon Is Pushing Facial Technology That a Study Says Could Be Biased

Of note. These kinds of studies are important to expose the bias inherent in some corporate facial recognition systems:

Over the last two years, Amazon has aggressively marketed its facial recognition technology to police departments and federal agencies as a service to help law enforcement identify suspects more quickly. It has done so as another tech giant, Microsoft, has called on Congress to regulate the technology, arguing that it is too risky for companies to oversee on their own.

Now a new study from researchers at the M.I.T. Media Lab has found that Amazon’s system, Rekognition, had much more difficulty in telling the gender of female faces and of darker-skinned faces in photos than similar services from IBM and Microsoft. The results raise questions about potential bias that could hamper Amazon’s drive to popularize the technology.

In the study, published Thursday, Rekognition made no errors in recognizing the gender of lighter-skinned men. But it misclassified women as men 19 percent of the time, the researchers said, and mistook darker-skinned women for men 31 percent of the time. Microsoft’s technology mistook darker-skinned women for men just 1.5 percent of the time.
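The core of the researchers’ method is disaggregated evaluation: instead of reporting one overall accuracy figure, they measure the error rate separately for each intersectional subgroup (gender crossed with skin tone). A minimal sketch of that computation is below; the function name, subgroup labels, and toy records are my own illustration, not the study’s actual data or code.

```python
from collections import defaultdict

def subgroup_error_rates(records):
    """Gender-misclassification rate per subgroup.

    records: iterable of (subgroup, true_gender, predicted_gender) tuples.
    Returns {subgroup: fraction of records where prediction != truth}.
    """
    totals = defaultdict(int)
    errors = defaultdict(int)
    for subgroup, truth, predicted in records:
        totals[subgroup] += 1
        if predicted != truth:
            errors[subgroup] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Toy records (invented, purely illustrative):
sample = [
    ("lighter_male", "male", "male"),
    ("lighter_male", "male", "male"),
    ("darker_female", "female", "male"),    # misclassified
    ("darker_female", "female", "female"),
]
print(subgroup_error_rates(sample))
# → {'lighter_male': 0.0, 'darker_female': 0.5}
```

A single aggregate accuracy over `sample` would hide the disparity (one error in four overall); splitting by subgroup is what exposes the 0 percent versus 31 percent gap the study reports.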

A study published a year ago found similar problems in the programs built by IBM, Microsoft and Megvii, an artificial intelligence company in China known as Face++. Those results set off an outcry that was amplified when a co-author of the study, Joy Buolamwini, posted YouTube videos showing the technology misclassifying famous African-American women, like Michelle Obama, as men.

The companies in last year’s report all reacted by quickly releasing more accurate technology. For the latest study, Ms. Buolamwini said, she sent a letter with some preliminary results to Amazon seven months ago. But she said that she hadn’t heard back from Amazon, and that when she and a co-author retested the company’s product a couple of months later, it had not improved.

Matt Wood, general manager of artificial intelligence at Amazon Web Services, said the researchers had examined facial analysis — a technology that can spot features such as mustaches or expressions such as smiles — and not facial recognition, a technology that can match faces in photos or video stills to identify individuals. Amazon markets both services.

“It’s not possible to draw a conclusion on the accuracy of facial recognition for any use case — including law enforcement — based on results obtained using facial analysis,” Dr. Wood said in a statement. He added that the researchers had not tested the latest version of Rekognition, which was updated in November.

Amazon said that in recent internal tests using an updated version of its service, the company found no difference in accuracy in classifying gender across all ethnicities.

[Photo: The M.I.T. researchers used these and other photos to study the accuracy of facial technology in identifying gender.]

With advancements in artificial intelligence, facial technologies — services that can be used to identify people in crowds, analyze their emotions, or detect their age and facial characteristics — are proliferating. Now, as companies begin to market these services more aggressively for uses like policing and vetting job candidates, they have emerged as a lightning rod in the debate about whether and how Congress should regulate powerful emerging technologies.

The new study, scheduled to be presented Monday at an artificial intelligence and ethics conference in Honolulu, is sure to inflame that argument.

Proponents see facial recognition as an important advance in helping law enforcement agencies catch criminals and find missing children. Some police departments, and the Federal Bureau of Investigation, have tested Amazon’s product.

But civil liberties experts warn that it can also be used to secretly identify people — potentially chilling Americans’ ability to speak freely or simply go about their business anonymously in public.

Over the last year, Amazon has come under intense scrutiny from federal lawmakers, the American Civil Liberties Union, shareholders, employees and academic researchers for marketing Rekognition to law enforcement agencies. That is partly because, unlike Microsoft, IBM and other tech giants, Amazon has been less willing to publicly discuss concerns.

Amazon, citing customer confidentiality, has also declined to answer questions from federal lawmakers about which government agencies are using Rekognition or how they are using it. The company’s responses have further troubled some federal lawmakers.

“Not only do I want to see them address our concerns with the sense of urgency it deserves,” said Representative Jimmy Gomez, a California Democrat who has been investigating Amazon’s facial recognition practices. “But I also want to know if law enforcement is using it in ways that violate civil liberties, and what — if any — protections Amazon has built into the technology to protect the rights of our constituents.”

In a letter last month to Mr. Gomez, Amazon said Rekognition customers must abide by Amazon’s policies, which require them to comply with civil rights and other laws. But the company said that for privacy reasons it did not audit customers, giving it little insight into how its product is being used.

The study published last year reported that Microsoft had a perfect score in identifying the gender of lighter-skinned men in a photo database, but that it misclassified darker-skinned women as men about one in five times. IBM and Face++ had an even higher error rate, each misclassifying the gender of darker-skinned women about one in three times.

Ms. Buolamwini said she had developed her methodology with the idea of harnessing public pressure, and market competition, to push companies to fix biases in their software that could pose serious risks to people.

[Photo: Ms. Buolamwini, who had done similar tests last year, conducted another round to learn whether industry practices had changed, she said. Credit: Tony Luong for The New York Times]

“One of the things we were trying to explore with the paper was how to galvanize action,” Ms. Buolamwini said.

Immediately after the study came out last year, IBM published a blog post, “Mitigating Bias in A.I. Models,” citing Ms. Buolamwini’s study. In the post, Ruchir Puri, chief architect at IBM Watson, said IBM had been working for months to reduce bias in its facial recognition system. The company post included test results showing improvements, particularly in classifying the gender of darker-skinned women. Soon after, IBM released a new system that the company said had a tenfold decrease in error rates.

A few months later, Microsoft published its own post, titled “Microsoft improves facial recognition technology to perform well across all skin tones, genders.” In particular, the company said, it had significantly reduced the error rates for female and darker-skinned faces.

Ms. Buolamwini wanted to learn whether the study had changed overall industry practices. So she and a colleague, Deborah Raji, a college student who did an internship at the M.I.T. Media Lab last summer, conducted a new study.

In it, they retested the facial systems of IBM, Microsoft and Face++. They also tested the facial systems of two companies that were not included in the first study: Amazon and Kairos, a start-up in Florida.

The new study found that IBM, Microsoft and Face++ all improved their accuracy in identifying gender.

By contrast, the study reported, Amazon misclassified the gender of darker-skinned females 31 percent of the time, while Kairos had an error rate of 22.5 percent.

Melissa Doval, the chief executive of Kairos, said the company, inspired by Ms. Buolamwini’s work, released a more accurate algorithm in October.

Ms. Buolamwini said the results of her studies raised fundamental questions for society about whether facial technology should be barred from certain situations, such as job interviews, or from certain products, like drones or police body cameras.

Some federal lawmakers are raising similar concerns.

“Technology like Amazon’s Rekognition should be used if and only if it is imbued with American values like the right to privacy and equal protection,” said Senator Edward J. Markey, a Massachusetts Democrat who has been investigating Amazon’s facial recognition practices. “I do not think that standard is currently being met.”

Source: Amazon Is Pushing Facial Technology That a Study Says Could Be Biased