This is something many people might never think about, unless they happen to be a person of color. Facial recognition software, or at least Microsoft’s software, was trained mostly on images of Caucasian male faces. That means it has a harder time recognizing darker faces, particularly women’s. But Microsoft has announced that it has improved the situation.
The Racist Side of Facial Recognition Software
Earlier this year, Microsoft’s Azure-based Face API drew criticism in a research paper. The researchers examined the error rate of the service’s gender classification and found it was as high as 20.8 percent when identifying women with darker skin tones, while for “lighter male faces” the error rate was zero percent.
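To make that kind of finding concrete, here is a minimal sketch of a disaggregated evaluation: group the test images by demographic subgroup, then compute the misclassification rate within each group. This is not the researchers’ actual code; the record format and group labels are hypothetical.

```python
from collections import defaultdict

# Hypothetical test records: each image carries the true gender, the
# classifier's predicted gender, and a demographic subgroup label,
# mirroring how such audits disaggregate results by subgroup.
results = [
    {"group": "darker female", "true": "female", "predicted": "male"},
    {"group": "lighter male", "true": "male", "predicted": "male"},
    # ... one record per test image
]

def error_rate_by_group(records):
    """Return the misclassification rate within each subgroup."""
    totals, errors = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        if r["predicted"] != r["true"]:
            errors[r["group"]] += 1
    return {group: errors[group] / totals[group] for group in totals}

for group, rate in error_rate_by_group(results).items():
    print(f"{group}: {rate:.1%} error rate")
```

An overall accuracy number can look fine while hiding exactly the kind of subgroup gap the paper reported, which is why the evaluation has to be broken out this way.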
This is because artificial intelligence technology is just that: artificial. It has to be built and trained by people, which means its results depend on how well it was trained and on the data used to train it.
When Microsoft was developing its software, it didn’t have enough images of people with darker skin tones, and this resulted in higher error rates for people of color, especially women.
Racism is an important topic to consider. Microsoft certainly didn’t set out to be racist, but by allowing the software to be trained primarily on white male faces, the question is whether the developers were unintentionally encoding their own racial bias.
Regardless of why Microsoft ended up with software that reflected the biases of its creators, it needed to fix it. After doing so, the company said it was able to reduce error rates for people with darker skin tones by up to 20 times. For women of all skin tones, error rates were reduced by nine times.
To achieve these improvements, the Face API team made three changes. The most obvious was revising and expanding the training and benchmark datasets, focusing specifically on skin tone, gender, and age.
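As an illustration of what expanding a dataset along those dimensions can involve, here is a hedged sketch, not Microsoft’s internal tooling: it audits a training set for how many examples fall into each skin tone, gender, and age bucket and reports the shortfall against a target. All bucket names and the target count are assumptions.

```python
from collections import Counter
from itertools import product

# Illustrative bucket definitions; real audits may use finer-grained
# categories (e.g., Fitzpatrick skin types).
SKIN_TONES = ["lighter", "darker"]
GENDERS = ["female", "male"]
AGE_BANDS = ["0-17", "18-34", "35-54", "55+"]

# Hypothetical per-image metadata; the attribute names are illustrative.
training_metadata = [
    {"skin_tone": "lighter", "gender": "male", "age_band": "18-34"},
    {"skin_tone": "darker", "gender": "female", "age_band": "35-54"},
    # ... one entry per training image
]

def audit_balance(metadata, target_per_bucket):
    """Report how many more images each (skin tone, gender, age band)
    bucket needs to reach the target, including buckets with no data."""
    counts = Counter(
        (m["skin_tone"], m["gender"], m["age_band"]) for m in metadata
    )
    return {
        bucket: max(0, target_per_bucket - counts[bucket])
        for bucket in product(SKIN_TONES, GENDERS, AGE_BANDS)
    }

for bucket, needed in audit_balance(training_metadata, 1000).items():
    if needed:
        print(f"{bucket}: collect {needed} more images")
```

Enumerating every bucket combination, rather than only the ones already present, is what surfaces the empty buckets, which are exactly the groups a model will perform worst on.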
“We had conversations about different ways to detect bias and operationalize fairness,” said Hanna Wallach, a senior researcher at Microsoft’s New York research lab and an expert on fairness, accountability, and transparency in AI systems. “We talked about data collection efforts to diversify the training data. We talked about different strategies to internally test our systems before we deploy them.”
Cornelia Carapcea, a principal program manager on the Cognitive Services team, said Wallach’s group eventually gave her team “a more nuanced understanding of bias” and helped it develop a dataset “that held us accountable across skin tones.”
“If we are training machine learning systems to mimic decisions made in a biased society, using data generated by that society, then those systems will necessarily reproduce its biases,” said Wallach.
This makes complete sense. Whether we like it or not, racism exists in our society, even if we’d rather not admit it. That same society creates the technology we use, which means the technology can carry the same racial bias. If we want our technology to do better, then we need to do better ourselves.
What do you think about Microsoft creating software that reflected the biases of its developers? Does it change your opinion of the company? How do you think it reflects on society? Add your thoughts in our comments section below.