Microsoft Wants Facial Recognition to Be Regulated to Prevent Bias

This article is a follow-up of sorts. Six months ago we published news that Microsoft was working on fixing their facial recognition software after it was deemed racially biased. Now the same company is asking that facial recognition be regulated to prevent bias. So they were caught with biased software, and now they want the entire industry to be regulated to prevent anyone else from having racially biased software.

As we reported in June, Microsoft’s Face API, part of Azure, was criticized in a research paper. The software had a difficult time recognizing people of color and women. The error rate was as high as 20.8 percent, but with “lighter male faces,” the error rate was zero percent.

The reason for this difference in recognition is that artificial intelligence has to be trained by people, so the results will only be as good as the data those people choose. Microsoft’s programmers didn’t include enough people with darker skin tones or enough women in the training data.

Microsoft worked on fixing this by diversifying their training data and began testing their systems internally before deploying them. They were able to reduce the error rate for darker-skinned people by up to twenty times and the error rate for women by nine times.
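To make the numbers above concrete, auditing a recognition model for bias simply means computing its error rate separately for each demographic group, which is what the cited research did. Here is a minimal sketch of that kind of audit; the group labels and sample data are hypothetical, not Microsoft’s actual tooling or results:

```python
# Illustrative bias audit: compute a model's error rate per demographic group.
# The data and group names below are made up for the example.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, predicted_label, true_label) tuples.
    Returns a dict mapping each group to its fraction of misclassifications."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Hypothetical audit data: (group, predicted label, true label)
sample = [
    ("lighter_male", "A", "A"), ("lighter_male", "B", "B"),
    ("darker_female", "A", "B"), ("darker_female", "B", "B"),
]
rates = error_rates_by_group(sample)
# rates == {"lighter_male": 0.0, "darker_female": 0.5}
```

A large gap between groups, like the one in this toy output, is exactly the kind of disparity the research paper flagged and that independent testing legislation would be designed to catch.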


It’s six months later and Microsoft is asking governments to pass legislation to require facial-recognition technology be independently tested to ensure that it’s accurate, to prevent bias, and to protect users’ rights.

“The facial recognition genie, so to speak, is just emerging from the bottle,” explained Brad Smith, Microsoft’s chief legal officer, in a blog post.

“Unless we act, we risk waking up five years from now to find that facial recognition services have spread in ways that exacerbate societal issues. By that time, these challenges will be much more difficult to bottle back up.”

The company is asking that the results of facial recognition be reviewed by people rather than leaving the task to computers.

“This includes where decisions may create a risk of bodily or emotional harm to a consumer, where there may be implications on human or fundamental rights, or where a consumer’s personal freedom or privacy may be impinged,” he explained.


Additionally, Smith suggested that those using facial recognition need to “recognize that they are not absolved of their obligation to comply with laws prohibiting discrimination against individual consumers or groups of consumers.”

He also wanted to be sure that government use of facial recognition didn’t step on the democratic freedoms and human rights of people.

“We must ensure that the year 2024 doesn’t look like a page from the novel 1984,” he concluded.

Microsoft is right, of course. It’s just interesting that six months ago they didn’t recognize the need to make sure their own software didn’t discriminate, and now not only do they recognize it, but they also want to be sure no one else can make the same mistakes they did.

Regardless of their earlier situation, is Microsoft right to demand legislation and that facial recognition be regulated? Let us know your thoughts about Microsoft’s request in the comments section below.

5 comments

  1. “Microsoft is right, of course”
    NO. Microsoft is NOT right. They are just trying to deflect criticism away from themselves. Rightly or wrongly, they got caught with their bias showing. Now MS is trying to appear to be the good guy by pushing for oversight legislation and preemptively accusing other developers of facial recognition software of bias.

    “Unless we act, we risk waking up five years from now to find that facial recognition services have spread in ways that exacerbate societal issues.”
    That is a fatuous statement. While there may be some truth to it, it is the hypersensitivity of various demographic groups that exacerbates societal issues much more. Individuals and groups interpret any negative comments about themselves as bias.

    “The company is asking that the results of facial recognition be reviewed by people rather than leaving the task to computers.”
    If Microsoft’s contention is that the bias in facial recognition software is influenced by people, doesn’t it stand to reason that the same bias would exist if people reviewed the results?

    “is Microsoft right to demand legislation and that facial recognition be regulated?”
    Whenever new regulations are introduced, a new fallible, corruptible bureaucracy is created. As Milton Friedman, winner of the Nobel Prize in Economics, said, “The government solution to a problem is usually as bad as the problem and very often makes the problem worse.”

    • Easy for you to call it “hypersensitivity” if you are not part of a “various demographic group.” If you have not been a minority or biased against, it’s easy to see it as hypersensitivity.

      • And how do you know that I “have not been a minority or biased against”? That is a dangerous assumption. Depending on how you define “minority”, everyone in the US today can be considered to be a part of some minority.

        As far as “hypersensitivity” goes, I’ll give an example.
        In our area we have a school district in which two-thirds of the population are Ultra-Orthodox Jews. The other third are African-Americans. The Jews attend yeshivas for their education, and the African-Americans go to public schools. The school board, because of their predominance in the district, consists only of the Ultra-Orthodox. Because of its makeup, the board directs most of the money in the school budget to fund the yeshivas. Very little money goes toward funding the needs of the public schools, and therefore the minority (in numbers and ethnicity) students. Any attempt to increase the funding for the public schools is always soundly defeated by the Ultra-Orthodox majority. Any criticism of its decisions and/or actions, whether justified or not, the school board claims is made because of “antisemitism.” OTOH, any comments the school board makes, the African-American community sees as “racist.” Wouldn’t you call that “hypersensitivity” by both sides?

        • You are 100% correct that I do not know for sure you are not a minority. But because of the things you have said here and in other threads, I would bet something significant you are not. I wouldn’t bet my life, but I would bet … something significant.

          And see, that’s the problem of racial profiling. That’s the problem with you saying “the hypersensitivity of various demographic groups.” In your effort to be PC and not point to a specific race, you made it more so. If you were a minority, you would not think of it as hypersensitivity.

          But everyone is guilty of it, which is why it’s not hypersensitivity. I racially profiled you by assuming you are not a minority. You showed bias by saying those feelings are hypersensitivity. But it doesn’t make it right.

          Microsoft was not intentionally being biased with their software. They fed the software images of faces. They went with the majority, white men, without realizing that the software would struggle to recognize darker-skinned people and women. This was not intentional bias, but it was still bias. Just because no one is at fault doesn’t mean there shouldn’t be an effort to improve it.

      • BTW – It is interesting that you decided to change the discussion about face recognition software and Microsoft into a discussion of “hypersensitivity” and my qualifications to use the term.
