This article is a follow-up of sorts. Six months ago we reported that Microsoft was working to fix facial recognition software that had been deemed racially biased. Now the same company is asking that facial recognition be regulated to prevent bias. In other words, they were caught with biased software, and now they want the entire industry regulated so that no one else ships racially biased software either.
The Initial Problem
As we reported in June, Microsoft’s Azure-based Face API was criticized in a research paper. The software had a difficult time recognizing people of color and women. The error rate was as high as 20.8 percent, yet with “lighter male faces,” the error rate was zero percent.
The reason for this disparity is that artificial intelligence has to be trained by people, and the results are only as good as the data those people provide. Microsoft’s training data simply didn’t include enough people with darker skin tones or enough women.
Microsoft worked on fixing this by diversifying its training data and by testing its systems internally before deploying them. As a result, it reduced the error rate for darker-skinned people by up to twenty times and the error rate for women by nine times.
Pushing for Legislation
Now, six months later, Microsoft is asking governments to pass legislation requiring that facial-recognition technology be independently tested to ensure its accuracy, prevent bias, and protect users’ rights.
“The facial recognition genie, so to speak, is just emerging from the bottle,” explained Brad Smith, Microsoft’s chief legal officer, in a blog post.
“Unless we act, we risk waking up five years from now to find that facial recognition services have spread in ways that exacerbate societal issues. By that time, these challenges will be much more difficult to bottle back up.”
The company is also asking that the results of facial recognition be reviewed by humans rather than leaving the task entirely to computers.
“This includes where decisions may create a risk of bodily or emotional harm to a consumer, where there may be implications on human or fundamental rights, or where a consumer’s personal freedom or privacy may be impinged,” he explained.
Additionally, Smith suggested that those using facial recognition need to “recognize that they are not absolved of their obligation to comply with laws prohibiting discrimination against individual consumers or groups of consumers.”
He also wants assurances that government use of facial recognition won’t trample on people’s democratic freedoms and human rights.
“We must ensure that the year 2024 doesn’t look like a page from the novel 1984,” he concluded.
Microsoft is right, of course. It’s just interesting that six months ago the company didn’t see the need to ensure its own software didn’t discriminate, and now not only does it recognize that need, but it also wants to make sure no one else can repeat its mistakes.
Regardless of its earlier missteps, is Microsoft right to demand legislation regulating facial recognition? Let us know your thoughts about Microsoft’s request in the comments section below.