Microsoft Works on Fixing Its Racially Biased Facial Recognition Software

This is something many people would never think of, unless, that is, you're a person of color. Facial recognition software, or at least Microsoft's, was trained mostly on images of Caucasian male faces. That means it has a harder time recognizing darker faces, particularly women's. But Microsoft has announced that it has improved on this.

Earlier this year, Microsoft's Azure-based Face API drew criticism in a research paper. The researchers measured how often the service misidentified the gender of faces and found an error rate as high as 20.8 percent when classifying women with darker skin tones, while for lighter-skinned male faces the error rate was zero percent.

This is because artificial intelligence technology is just that: artificial. It has to be built and trained by people, which means its results depend on how well it was programmed and on the data that was used to train it.
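To see how skewed training data produces skewed results, here is a toy simulation. It is purely illustrative and has nothing to do with Microsoft's actual system: a simple nearest-neighbor "recognizer" over made-up one-dimensional features, where one group vastly outnumbers the other in training. The under-represented group typically ends up with a noticeably higher error rate, even though the classifier logic treats everyone identically.

```python
import random

random.seed(0)

def make_samples(n, center, label):
    """Draw n synthetic feature values clustered around `center` for one group."""
    return [(random.gauss(center, 1.0), label) for _ in range(n)]

# Skewed training set: 500 samples of group A, only 10 of group B
# (mirroring a dataset built mostly from one demographic).
train = make_samples(500, 0.0, "A") + make_samples(10, 3.0, "B")
# Balanced test set: 200 samples of each group.
test = make_samples(200, 0.0, "A") + make_samples(200, 3.0, "B")

def predict(x):
    """Classify x by the label of the nearest training sample (1-NN)."""
    return min(train, key=lambda sample: abs(sample[0] - x))[1]

for label in ("A", "B"):
    group = [(x, y) for (x, y) in test if y == label]
    errors = sum(predict(x) != y for (x, y) in group)
    print(f"group {label}: error rate {errors / len(group):.1%}")
```

The logic is identical for both groups; only the amount of training data differs, and that alone is enough to create a performance gap.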

When Microsoft was developing its software, it didn't have enough images of people with darker skin tones, and this resulted in higher error rates for people of color, especially women.

[Image: Microsoft facial recognition software scanning a lighter-skinned face]

Racism is an important topic to consider here. Microsoft certainly didn't set out to be racist, but because the software was trained primarily on white male faces, the question is whether the programmers unintentionally built their own racial bias into it.

Regardless of why Microsoft ended up with software that reflected the biases of its creators, it needed to fix the problem. After doing so, the company said it had reduced error rates for people with darker skin by as much as 20 times; for women, regardless of skin tone, error rates were reduced by as much as nine times.
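Taken at face value, those multipliers imply a dramatic improvement. Assuming "reduced by as much as 20 times" means divided by 20 (my interpretation, not a figure Microsoft published), the worst-case 20.8 percent error rate would drop to roughly 1 percent:

```python
# Implied worst-case error rate after the fix, assuming "reduced up
# to 20 times" means divided by 20 (an interpretation, not a figure
# Microsoft published).
old_error = 0.208   # worst-case gender-classification error reported in the paper
improvement = 20.0  # "up to 20 times" reduction claimed by Microsoft
new_error = old_error / improvement
print(f"implied worst-case error: {new_error:.1%}")  # -> 1.0%
```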

To achieve these improvements, the Face API team made three changes. The most obvious was revising and expanding the training and benchmark datasets, focusing specifically on skin tone, gender, and age.
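The article doesn't detail Microsoft's benchmarking pipeline, but the core idea of benchmarking across skin tone, gender, and age is to disaggregate a single overall error rate into per-group error rates, so that disparities become visible instead of averaging away. A minimal sketch, with hypothetical records and function names:

```python
from collections import defaultdict

# Hypothetical benchmark records: each holds the model's prediction,
# the ground-truth label, and demographic attributes for grouping.
# (Illustrative only -- not Microsoft's actual data or API.)
results = [
    {"predicted": "female", "actual": "female", "skin_tone": "darker", "gender": "female"},
    {"predicted": "male",   "actual": "female", "skin_tone": "darker", "gender": "female"},
    {"predicted": "male",   "actual": "male",   "skin_tone": "lighter", "gender": "male"},
    # ... many more benchmark images ...
]

def error_rate_by_group(records, attribute):
    """Disaggregate the overall error rate by a demographic attribute."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for r in records:
        group = r[attribute]
        totals[group] += 1
        if r["predicted"] != r["actual"]:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

print(error_rate_by_group(results, "skin_tone"))
print(error_rate_by_group(results, "gender"))
```

An overall error rate of a few percent can hide a much higher rate for one subgroup; breaking the numbers out this way is what made the 20.8-percent-versus-zero gap visible in the first place.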

“We had conversations about different ways to detect bias and operationalize fairness,” said Hannah Wallach, a senior researcher at Microsoft’s New York research lab and an expert on fairness, accountability, and transparency in AI systems. “We talked about data collection efforts to diversify the training data. We talked about different strategies to internally test our systems before we deploy them.”

[Image: Microsoft facial recognition software scanning a darker-skinned face]

Cornelia Carapcea, a principal program manager on the Cognitive Services team, said Wallach’s group eventually provided “a more nuanced understanding of bias” and helped her team develop a dataset “that held us accountable across skin tones.”

“If we are training machine learning systems to mimic decisions made in a biased society, using data generated by that society, then those systems will necessarily reproduce its biases,” said Wallach.

This makes complete sense. Whether we like to admit it or not, racism exists in our society, and sometimes we simply can't deny it. That same society creates the technology we use, which means the technology can be racially biased as well. If we want our technology to do better, then we need to do better ourselves.

What do you think about Microsoft creating software that reflected the biases of its developers? Does it change your opinion of the company? How do you think it reflects on society? Add your thoughts in our comments section below.

3 comments

  1. I’m no fan of Microsoft, but rather than “racist” I suspect this was just a case of people working on the facial recognition not realizing it would read darker skin differently than it reads lighter skin. Even simple mistakes and oversights are racism these days, which diminishes cases of true racism.

  2. Another thing that Microsoft didn’t get right on the first try… or the second… or the third… or whichever.

    However, I would not call this shortcoming of Face API “racist”, although considering today’s hyper-sensitive society and certain people’s eagerness to find evil intent in any actions of others, I am not surprised that the specter of “racism” has reared its ugly head. Had the software been written in an Asian or an African country by local programmers, it would have had a high error rate in recognizing Caucasian faces and thus would have been considered just as “racist”.

    Having been a programmer for many years I would attribute this problem to two factors:
    #1) When testing a program, you use the data that is easiest to obtain, in this case yourself and your co-workers. Obviously, the team that wrote this AI was predominantly white males. (The whys and wherefores of the team’s makeup are fodder for another discussion.)

    #2) When debugging a program, at first you use a small sample of data (in this case white males) to limit the variables and make sure that the logic works. Once you can’t make the logic fail, you expand the data set (in this case, add other ethnic groups). If the programming team was not diverse enough, then the recognition of some ethnic groups was not sufficiently tested.

    I have never been a fan of Microsoft, but in this case I must stick up for them. Their job is to write software, not to be Politically Correct. In order for their facial recognition AI not to be considered “racist”, its database would have to contain a picture of every face on the planet. Otherwise there will always be a group with a high error rate of recognition.

  3. Racially biased? Give me a break. It’s simply harder, even much harder, for a camera to discern features of a dark face. You know, how cameras don’t take great pictures in low light…
