A teenager is suing Apple over a case of mistaken identity. After someone stole his learner’s permit and used it while robbing multiple Apple Stores, the company’s facial recognition software misidentified him as the thief. He’s suing Apple for $1 billion for his troubles.
But is this teen’s lawsuit frivolous? What type of responsibility does AI bear here? Should facial recognition be used to identify criminals?
Sayak does find this lawsuit frivolous; in fact, he finds it “beyond frivolous.” He believes Apple is only being sued for that amount because the plaintiff knows the company has that kind of money to burn.
He added that “facial recognition technology is not perfect, and there will always be a case or two of honest mistakes.” He believes the police should be trained not to rely solely on CCTV footage or a database, and that people should recognize this was a very rare circumstance. Anyone caught up in such an arrest, he believes, should stay calm and try to work it out.
Phil relates it back to sci-fi and notes that anyone who reads the genre “will tell you it’s not a good idea to let AI and robots take care of anything where human lives or well-being are at stake.” He realizes that more and more jobs are being taken over by robots and AI, yet “as with every machine job, there needs to be human oversight.” While some might ask, “Why buy a dog and bark yourself?” he thinks AI and robotics should be the first line of defense, while acting on any detected threat should come down to human decisions.
Alex thinks we should use whatever tools we have available to investigate crime, but that there should also be corresponding laws to protect people’s rights. “We need better laws on biometrics desperately, but it’s not a hot political topic at the moment.”
Fabio believes we should use facial recognition to identify criminals “if it means getting them off the streets.”
Andrew also thinks it should be used, but not because he supports stronger surveillance or wants to give up liberty for security. Rather, he realizes it will be used regardless and suggests “we need to develop legal frameworks for using these things right now before it gets out of hand.”
If we have cameras feeding visual data to an AI, and it has the ability to pick out a criminal from a crowd effectively, then “police departments wouldn’t really be doing their job if they didn’t look at it as an option.”
Yet he doesn’t believe the system should go so far as keeping logs on everyone for possible future identification. He thinks it should be governed more transparently: “There are cases where artificial constraints have to be placed, and government-administered Face ID is absolutely one of those cases.”
I agree with most here. Law enforcement would be crazy not to take advantage of this technology. That said, they’d also be crazy to put 100% trust in anything labeled “artificial intelligence.” There need to be checks and balances, as many indicated, using both technology and human judgment to decide such cases.
Does this match your own thinking? Setting aside the point that this is a very frivolous case, should AI be trusted in criminal investigations? Should facial recognition be used to identify criminals? Add your thoughts and comments to the discussion below.