Facial Expression Analysis Can Help Overcome Racial Bias In The Assessment Of Advertising Effectiveness

Interesting. The advertisers are always ahead of the rest of us…:

There has been significant coverage of bias problems in the use of machine learning to analyze people. There has also been pushback against the use of facial recognition because of both bias and inaccuracy. However, a narrower approach, one focused on recognizing emotions rather than identifying individuals, can address marketing challenges. Sentiment analysis by survey is one thing, but tracking human facial responses can significantly improve the accuracy of the analysis.

The Brookings Institution points to a projection that the US will become a majority-minority nation by 2045, meaning the non-white population will exceed 50% of the total. Even before then, the demographic shift means that the non-white population has become a significant part of the consumer market. In this multicultural society, it’s important to know whether messages work across those cultures. Today’s marketing needs are much more detailed and subtle than the famous example of the Chevy Nova not selling in Latin America because “no va” means “no go” in Spanish.

It’s also important to understand not only the growth of the multicultural markets, but also what they mean in pure dollars. A chart from the Collage Group shows that the 2017 revenues from the three largest minority segments are similar to the revenues of entire nations.

It would be foolish for companies to ignore these already large and continually growing segments. While there is an obvious need for more inclusive imagery in ads, particularly in the people shown, the picture is only part of the equation. The right words must also be used to interest different demographics. Of course, a marketing team thinking it has been more inclusive doesn’t make it so. Just as with other aspects of marketing, these messages must be tested.

Companies have begun to look at vision AI for more than the much-reported use case of facial recognition, that is, identifying people. While social media and surveys can catch some sentiment, analysis of facial features is far more detailed. It is also an easier AI problem than full facial identification. Identifying basic facial features such as the mouth and the eyes, then tracking how they change as someone watches or reads an advertisement, can catch not only a smile but the “strength” of that smile. Other expressions can be captured and scored the same way, at scale.
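To make that concrete, here is a minimal sketch of how a smile and its "strength" might be scored from a single webcam frame, using OpenCV's stock Haar cascades. The cascade choices, detection thresholds, and the width-ratio strength heuristic are my own illustrative assumptions, not any vendor's actual method.

```python
# Minimal sketch: detect a face, then a smile inside it, and score the
# smile's "strength" as smile width relative to face width.
# Uses OpenCV's bundled Haar cascades; all thresholds are illustrative.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
smile_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_smile.xml")

def smile_strength(frame):
    """Return a 0..1 smile score for the first face found, else None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    for (x, y, w, h) in faces:
        roi = gray[y + h // 2:y + h, x:x + w]   # search the lower half of the face
        smiles = smile_cascade.detectMultiScale(roi, scaleFactor=1.7, minNeighbors=20)
        if len(smiles) == 0:
            return 0.0
        widest = max(s[2] for s in smiles)      # widest smile detection
        return min(widest / w, 1.0)             # wider smile -> stronger score
    return None

cap = cv2.VideoCapture(0)                        # webcam of a panelist watching an ad
ok, frame = cap.read()
if ok:
    print("smile strength:", smile_strength(frame))
cap.release()
```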

Then, without having to identify the individual people, demographic information can build a picture of how sentiment varies between groups. For instance, the same ad can easily get a different typical reaction from white, middle-aged women than from older black men or East Asian teenagers. With social media polarizing and fragmenting many attitudes, it’s important to understand how marketing messages are received across the target audiences.
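Rolling per-viewer expression scores up by self-reported demographic group is then straightforward aggregation, with no individual identity stored. A sketch, where the group labels and scores are hypothetical:

```python
# Sketch: aggregate per-viewer expression scores by self-reported demographic
# group, with no individual identification. All values below are made up.
import pandas as pd

responses = pd.DataFrame({
    "group": ["white women 35-54", "white women 35-54",
              "black men 55+",     "black men 55+",
              "East Asian 13-19",  "East Asian 13-19"],
    "smile_score": [0.72, 0.65, 0.31, 0.40, 0.55, 0.61],  # e.g., from the tracker above
})

# Mean reaction per group shows where the message lands, and where it doesn't.
summary = responses.groupby("group")["smile_score"].agg(["mean", "std", "count"])
print(summary)
```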

The use of AI to rapidly provide feedback on sentiment will help advertisers better tune messages, whether aiming at a general message that attracts an audience across the US marketing landscape or finding focused messages that attract specific demographics. One example of marketers leveraging AI in this arena is Collage Group, a market research firm that has helped companies better understand and improve messaging to minority communities. Collage Group has recently rolled out AdRate, a process for evaluating ads that integrates vision AI to analyze viewer sentiment.

“Companies have come to understand the growing multicultural nature of the US consumer market,” said David Wellisch, CEO, Collage Group. “Artificial intelligence is improving Collage Group’s ability to help B2C companies understand the different reactions in varied communities and then adapt their [messaging] to the best effect.”

While questions of accuracy and ethics in the use of facial recognition will continue in many areas of business, the opportunity to better tailor messages to the diversity of the market is a clear benefit. Visual AI that enhances the accuracy of sentiment analysis is clearly a segment that will grow.

Source: Facial Expression Analysis Can Help Overcome Racial Bias In The Assessment Of Advertising Effectiveness

Amazon Is Pushing Facial Technology That a Study Says Could Be Biased

Of note. These kinds of studies are important to expose the bias inherent in some corporate facial recognition systems:

Over the last two years, Amazon has aggressively marketed its facial recognition technology to police departments and federal agencies as a service to help law enforcement identify suspects more quickly. It has done so as another tech giant, Microsoft, has called on Congress to regulate the technology, arguing that it is too risky for companies to oversee on their own.

Now a new study from researchers at the M.I.T. Media Lab has found that Amazon’s system, Rekognition, had much more difficulty in telling the gender of female faces and of darker-skinned faces in photos than similar services from IBM and Microsoft. The results raise questions about potential bias that could hamper Amazon’s drive to popularize the technology.

In the study, published Thursday, Rekognition made no errors in recognizing the gender of lighter-skinned men. But it misclassified women as men 19 percent of the time, the researchers said, and mistook darker-skinned women for men 31 percent of the time. Microsoft’s technology mistook darker-skinned women for men just 1.5 percent of the time.
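The headline numbers in studies like this are per-subgroup misclassification rates, which is simple arithmetic once predictions are grouped. A sketch of that calculation, with fabricated placeholder records:

```python
# Sketch of the study's core arithmetic: gender-classification error rate
# broken out by subgroup. The records below are fabricated placeholders.
from collections import defaultdict

records = [  # (subgroup, true_gender, predicted_gender)
    ("darker-skinned female", "F", "M"),
    ("darker-skinned female", "F", "F"),
    ("lighter-skinned male",  "M", "M"),
    ("lighter-skinned male",  "M", "M"),
]

totals, errors = defaultdict(int), defaultdict(int)
for group, truth, pred in records:
    totals[group] += 1
    if truth != pred:
        errors[group] += 1

for group in totals:
    rate = 100.0 * errors[group] / totals[group]
    print(f"{group}: {rate:.1f}% misclassified")
```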

A study published a year ago found similar problems in the programs built by IBM, Microsoft and Megvii, an artificial intelligence company in China known as Face++. Those results set off an outcry that was amplified when a co-author of the study, Joy Buolamwini, posted YouTube videos showing the technology misclassifying famous African-American women, like Michelle Obama, as men.

The companies in last year’s report all reacted by quickly releasing more accurate technology. For the latest study, Ms. Buolamwini said, she sent a letter with some preliminary results to Amazon seven months ago. But she said that she hadn’t heard back from Amazon, and that when she and a co-author retested the company’s product a couple of months later, it had not improved.

Matt Wood, general manager of artificial intelligence at Amazon Web Services, said the researchers had examined facial analysis — a technology that can spot features such as mustaches or expressions such as smiles — and not facial recognition, a technology that can match faces in photos or video stills to identify individuals. Amazon markets both services.
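The distinction Dr. Wood draws maps onto two different API calls. A hedged sketch against Rekognition's public boto3 interface, where the bucket and file names are placeholders:

```python
# The two distinct capabilities Amazon markets, via Rekognition's boto3 API.
# Bucket and object names are placeholders.
import boto3

rek = boto3.client("rekognition")

# Facial ANALYSIS: describe attributes (smile, mustache, estimated gender, ...)
# of whatever faces appear -- no identity involved.
analysis = rek.detect_faces(
    Image={"S3Object": {"Bucket": "my-bucket", "Name": "photo.jpg"}},
    Attributes=["ALL"],
)
for face in analysis["FaceDetails"]:
    print(face["Gender"], face["Smile"])

# Facial RECOGNITION: match a probe face against a reference face to decide
# whether they are the same person.
match = rek.compare_faces(
    SourceImage={"S3Object": {"Bucket": "my-bucket", "Name": "suspect.jpg"}},
    TargetImage={"S3Object": {"Bucket": "my-bucket", "Name": "still.jpg"}},
    SimilarityThreshold=90,
)
print(match["FaceMatches"])  # non-empty => claimed identity match
```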

“It’s not possible to draw a conclusion on the accuracy of facial recognition for any use case — including law enforcement — based on results obtained using facial analysis,” Dr. Wood said in a statement. He added that the researchers had not tested the latest version of Rekognition, which was updated in November.

Amazon said that in recent internal tests using an updated version of its service, the company found no difference in accuracy in classifying gender across all ethnicities.


With advancements in artificial intelligence, facial technologies — services that can be used to identify people in crowds, analyze their emotions, or detect their age and facial characteristics — are proliferating. Now, as companies begin to market these services more aggressively for uses like policing and vetting job candidates, they have emerged as a lightning rod in the debate about whether and how Congress should regulate powerful emerging technologies.

The new study, scheduled to be presented Monday at an artificial intelligence and ethics conference in Honolulu, is sure to inflame that argument.

Proponents see facial recognition as an important advance in helping law enforcement agencies catch criminals and find missing children. Some police departments, and the Federal Bureau of Investigation, have tested Amazon’s product.

But civil liberties experts warn that it can also be used to secretly identify people — potentially chilling Americans’ ability to speak freely or simply go about their business anonymously in public.

Over the last year, Amazon has come under intense scrutiny by federal lawmakers, the American Civil Liberties Union, shareholders, employees and academic researchers for marketing Rekognition to law enforcement agencies. That is partly because, unlike Microsoft, IBM and other tech giants, Amazon has been less willing to publicly discuss concerns.

Amazon, citing customer confidentiality, has also declined to answer questions from federal lawmakers about which government agencies are using Rekognition or how they are using it. The company’s responses have further troubled some federal lawmakers.

“Not only do I want to see them address our concerns with the sense of urgency it deserves,” said Representative Jimmy Gomez, a California Democrat who has been investigating Amazon’s facial recognition practices. “But I also want to know if law enforcement is using it in ways that violate civil liberties, and what — if any — protections Amazon has built into the technology to protect the rights of our constituents.”

In a letter last month to Mr. Gomez, Amazon said Rekognition customers must abide by Amazon’s policies, which require them to comply with civil rights and other laws. But the company said that for privacy reasons it did not audit customers, giving it little insight into how its product is being used.

The study published last year reported that Microsoft had a perfect score in identifying the gender of lighter-skinned men in a photo database, but that it misclassified darker-skinned women as men about one in five times. IBM and Face++ had an even higher error rate, each misclassifying the gender of darker-skinned women about one in three times.

Ms. Buolamwini said she had developed her methodology with the idea of harnessing public pressure, and market competition, to push companies to fix biases in their software that could pose serious risks to people.

“One of the things we were trying to explore with the paper was how to galvanize action,” Ms. Buolamwini said.

Immediately after the study came out last year, IBM published a blog post, “Mitigating Bias in A.I. Models,” citing Ms. Buolamwini’s study. In the post, Ruchir Puri, chief architect at IBM Watson, said IBM had been working for months to reduce bias in its facial recognition system. The company post included test results showing improvements, particularly in classifying the gender of darker-skinned women. Soon after, IBM released a new system that the company said had a tenfold decrease in error rates.

A few months later, Microsoft published its own post, titled “Microsoft improves facial recognition technology to perform well across all skin tones, genders.” In particular, the company said, it had significantly reduced the error rates for female and darker-skinned faces.

Ms. Buolamwini wanted to learn whether the study had changed overall industry practices. So she and a colleague, Deborah Raji, a college student who did an internship at the M.I.T. Media Lab last summer, conducted a new study.

In it, they retested the facial systems of IBM, Microsoft and Face++. They also tested the facial systems of two companies that were not included in the first study: Amazon and Kairos, a start-up in Florida.

The new study found that IBM, Microsoft and Face++ all improved their accuracy in identifying gender.

By contrast, the study reported, Amazon misclassified the gender of darker-skinned females 31 percent of the time, while Kairos had an error rate of 22.5 percent.

Melissa Doval, the chief executive of Kairos, said the company, inspired by Ms. Buolamwini’s work, released a more accurate algorithm in October.

Ms. Buolamwini said the results of her studies raised fundamental questions for society about whether facial technology should not be used in certain situations, such as job interviews, or in products, like drones or police body cameras.

Some federal lawmakers are raising similar concerns.

“Technology like Amazon’s Rekognition should be used if and only if it is imbued with American values like the right to privacy and equal protection,” said Senator Edward J. Markey, a Massachusetts Democrat who has been investigating Amazon’s facial recognition practices. “I do not think that standard is currently being met.”

Source: Amazon Is Pushing Facial Technology That a Study Says Could Be Biased

Most engineers are white — and so are the faces they use to train software – Recode

Not terribly surprising but alarming given how much facial recognition is used these days.

While the focus of this article is with respect to Black faces (as it is with the Implicit Association Test), the same issue likely applies to other minority groups.

I welcome any comments from those with experience on how the various face recognition features perform in commercial software such as Flickr, Google Photos, etc.:

Facial recognition technology is known to struggle to recognize black faces. The underlying reason for this shortcoming runs deeper than you might expect, according to researchers at MIT.

Speaking during a panel discussion on artificial intelligence at the World Economic Forum Annual Meeting this week, MIT Media Lab director Joichi Ito said it likely stems from the fact that most engineers are white.

“The way you get into computers is because your friends are into computers, which is generally white men. So, when you look at the demographic across Silicon Valley you see a lot of white men,” Ito said.

Ito relayed an anecdote about how a graduate researcher in his lab had found that commonly used libraries for facial recognition have trouble reading dark faces.

“These libraries are used in many of the products that you have, and if you’re an African-American person you get in front of it, it won’t recognize your face,” he said.

Libraries are collections of pre-written code developers can share and reuse to save time instead of writing everything from scratch.

Joy Buolamwini, the graduate researcher on the project, told Recode in an email that software she used did not consistently detect her face, and that more analysis is needed to make broader claims about facial recognition technology.

“Given the wide range of skin-tone and facial features that can be considered African-American, more precise terminology and analysis is needed to determine the performance of existing facial detection systems,” she said.

“One of the risks that we have of the lack of diversity in engineers is that it’s not intuitive which questions you should be asking,” Ito said. “And even if you have a design guidelines, some of this stuff is kind of feel decision.”

“Calls for tech inclusion often miss the bias that is embedded in written code,” Buolamwini wrote in a May post on Medium.

Reused code, while convenient, is limited by the training data it uses to learn, she said. In the case of code for facial recognition, the code is limited by the faces included in the training data.

“A lack of diversity in the training set leads to an inability to easily characterize faces that do not fit the normal face derived from the training set,” wrote Buolamwini.
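One practical first step Buolamwini's observation suggests is auditing the composition of a training set before training on it. A sketch, where the label file, its columns, and the 5% flagging threshold are hypothetical:

```python
# Sketch: audit a face training set's demographic composition before training.
# "labels.csv", its columns, and the 5% threshold are hypothetical.
import csv
from collections import Counter

counts = Counter()
with open("labels.csv") as f:               # one row per training image
    for row in csv.DictReader(f):
        counts[row["skin_type"]] += 1       # e.g., Fitzpatrick type I-VI

total = sum(counts.values())
for group, n in counts.most_common():
    share = 100.0 * n / total
    flag = "  <-- underrepresented" if share < 5 else ""
    print(f"{group}: {n} images ({share:.1f}%){flag}")
```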

She wrote that to cope with limitations in one project involving facial recognition technology, she had to wear a white mask so that her face could “be detected in a variety of lighting conditions.”

“While this is a temporary solution, we can do better than asking people to change themselves to fit our code. Our task is to create code that can work for people of all types.”

Racial profiling, by a computer? Police facial-ID tech raises civil rights concerns. – The Washington Post

The next frontier of combatting profiling:

The growing use of facial-recognition systems has led to a high-tech form of racial profiling, with African Americans more likely than others to have their images captured, analyzed and reviewed during computerized searches for crime suspects, according to a new report based on records from dozens of police departments.

The report, released Tuesday by the Center for Privacy & Technology at Georgetown University’s law school, found that half of all American adults have their images stored in at least one facial-recognition database that police can search, typically with few restrictions.

The steady expansion of these systems has led to a disproportionate racial impact because African Americans are more likely to be arrested and have mug shots taken, one of the main ways that images end up in police databases. The report also found that criminal databases are rarely “scrubbed” to remove the images of innocent people, nor are facial-recognition systems routinely tested for accuracy, even though some struggle to distinguish among darker-skinned faces.

The combination of these factors means that African Americans are more likely to be singled out as possible suspects in crimes — including ones they did not commit, the report says.

“This is a serious problem, and no one is working to fix it,” said Alvaro M. Bedoya, executive director of the Georgetown Law center that produced the report on facial-recognition technology. “Police departments are talking about it as if it’s race-blind, and it’s just not true.”

The 150-page report, called “The Perpetual Line-Up,” found a rapidly growing patchwork of facial-recognition systems at the federal, state and local level with little regulation and few legal standards. Some databases include mug shots, others driver’s-license photos. Some states, such as Maryland and Pennsylvania, use both as they analyze crime-scene images in search of potential suspects.

At least 117 million Americans have images of their faces in one or more police databases, meaning their resemblance to images taken from crime scenes can become the basis for follow-up by investigators. The FBI ran a pilot program this year in which it could search the State Department’s passport and visa databases for leads in criminal cases. Overall, the Government Accountability Office reported in May, the FBI has had access to 412 million facial images for searches; the faces of some Americans appear several times in these databases.

Source: Racial profiling, by a computer? Police facial-ID tech raises civil rights concerns. – The Washington Post