Using facial recognition to unlock your phone or tag you in Facebook photos might have enough cool value to outweigh the creep factor for some people, but what about having video footage of you checked against a government database?
Government agencies and the technology companies working with them are optimistic about the potential for AI and facial recognition to make law enforcement safer and more efficient (see FaceFirst and NEC's cheery, but mildly dystopian, demo videos). But inaccurate software, documented racial bias in algorithms, and the long-term potential for a mass surveillance apparatus all suggest the technology should be approached with caution.
How does facial recognition work?
Humans have fingerprints, but they also have "faceprints." There are dozens of individual data points that can be analyzed about the human face, from the distance between your eyes to your "skinprint." Facial recognition technology looks at images of faces, analyzes these features, and returns its best guess about who the face belongs to. With high-quality pictures and a well-made system, it can be extremely accurate, and the increasing availability and efficiency of artificial intelligence have made it much easier and faster to collect and analyze all this data.
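Under the hood, most modern systems reduce each face to a numeric "embedding" vector and identify people by comparing vectors. A minimal sketch of that matching step, using invented four-dimensional embeddings in place of the 128-plus-dimensional vectors real systems derive from neural networks:

```python
import math

# A face "embedding" is a numeric vector summarizing facial features.
# These values and names are made up for illustration only.
database = {
    "alice": [0.12, 0.80, 0.35, 0.41],
    "bob":   [0.90, 0.10, 0.55, 0.22],
}

def cosine_similarity(a, b):
    """Similarity of two embeddings: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def best_match(probe, db, threshold=0.95):
    """Return the closest identity, or None if nothing clears the threshold."""
    name, score = max(
        ((n, cosine_similarity(probe, emb)) for n, emb in db.items()),
        key=lambda pair: pair[1],
    )
    return (name, score) if score >= threshold else (None, score)

probe = [0.11, 0.82, 0.33, 0.40]  # embedding from a new photo
print(best_match(probe, database))
```

The threshold is the key knob: set it too low and strangers get "recognized"; set it too high and real matches get missed. Image quality matters because a blurry or poorly lit photo produces a noisier embedding, pushing even genuine matches below the threshold.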
How is law enforcement using it?
Police and government entities use facial recognition in basically the same two ways: checking people against a database to find out who they are (and raising red flags if someone is wanted) and using ID photos to actively search for people. The US, the UK, and other countries are experimenting with these technologies on a large scale, but they've been slowed by public pushback. Countries like China, on the other hand, don't need public approval to start experimenting with real-time AI surveillance. Some of the most well-known current projects include:
- Florida, Oregon, and Amazon: Combined with law enforcement data, Amazon's Rekognition has been used to solve missing-persons and trafficking cases, and pilot programs in Orlando, Florida and Washington County, Oregon are both experimenting with using surveillance camera networks to identify people in real time. They may also put the technology into police body cams.
- Moscow, Russia: Moscow already has a massive CCTV network, and many of its cameras are now being hooked up to facial recognition software. The city's pilot program led to forty-two arrests in its first month and at least one during the 2018 World Cup.
- Singapore: As one of the world's most tightly managed countries, Singapore is perhaps unsurprisingly set to be the first to build facial recognition technology into its lampposts. Up to 100,000 lampposts could soon be equipped with security cameras that keep track of crowds and watch for criminals and terrorists.
- China: Unrestrained by public opinion, China is probably the world leader in mass facial recognition technology. Most famously, its system managed to pick one face out of a crowd of 60,000 at a pop concert and to spot twenty-five suspects at a beer festival, but it's not just event venues. Cameras also monitor airports, railway stations, and crosswalks (where jaywalkers are identified and shamed on nearby billboards), feed "smart glasses" issued to police officers, and even watch some public toilets, where the system limits how much toilet paper you can take (to prevent theft).
What’s the problem?
Many privacy advocates are worried that even if it’s developed as a well-intentioned technology with safeguards and limitations, governments could easily misuse facial recognition if it’s allowed to grow into any form of mass surveillance.
Large-scale electronic monitoring and tracking systems are becoming fairly standard in most countries, and adding facial recognition to the mix would make them that much more effective at finding and suppressing problematic individuals or groups of people – activists, journalists, political opponents, etc. Facial recognition may even be able to figure out your politics and sexuality, which could clearly be an issue in many places.
Another concern is algorithmic bias, a well-documented issue. Some of the most widely used facial recognition systems have been found to misidentify images of people with very dark or very pale skin at higher-than-average rates, and to be less accurate for women.
A misidentification makes it more likely that the wrong person will be investigated and possibly arrested, which is clearly a problem if some people are more likely than others to be targeted. Amazon’s Rekognition system (the one being shopped around to law enforcement agencies in the US) even mistakenly identified twenty-eight members of Congress as criminals, with a disproportionately high number of false positives being African-American.
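The scale of the problem depends heavily on how many faces get scanned and where the match threshold is set. A toy sketch with entirely invented scores, showing how lowering the confidence threshold inflates false positives when screening innocent people:

```python
import random

# Hypothetical match scores from scanning 100 innocent people against
# a mugshot database; the values are invented for illustration only.
random.seed(0)
scores = [random.uniform(0.5, 1.0) for _ in range(100)]

def false_positives(scores, threshold):
    """Count innocent people wrongly flagged as matches at a threshold."""
    return sum(s >= threshold for s in scores)

for t in (0.80, 0.95, 0.99):
    print(f"threshold {t}: {false_positives(scores, t)} false positives")
```

Even a small per-person error rate turns into many wrongly flagged people once a system scans crowds of thousands, which is why the threshold a vendor ships as the default matters so much in a law enforcement setting.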
Putting on a brave new face
As with most technologies in the twenty-first century, it's not a question of whether they'll be developed (they definitely will) but how they'll be used. China's systems may not be able to track every citizen in real time just yet, but they may be only a few breakthroughs away from making that a reality, and the rest of the world can certainly catch up. There's no question that facial recognition will be part of law enforcement operations everywhere, but keeping the process open, transparent, and legal is important for everyone involved.