In a move that feels like the Tom Cruise film Minority Report playing out in real life, law enforcement is using software from Palantir. The technology was once used to predict where bombs would be placed in Iraq; now it’s in California, collecting data to identify “hot spots” where a larger police presence is needed.
We asked our writers, “Do you think using tech to predict crime is a good idea?”
Alex feels that it doesn’t necessarily have to be like Minority Report. He believes it’s “possible to use big data to predict crime without trying to arrest people for crimes they haven’t committed,” provided it’s used to decide how to distribute resources in ways that “don’t trample on civil rights.” He stresses that it must be understood as just a tool, not a “definitive statement about what will or will not happen.”
Phil’s concerns aren’t really with the technology so much as with “who is in charge of it, how closely would they be regulated, and how much they rely on the machine’s judgement.” He, too, invokes Minority Report, hoping that those in charge of this intelligence-gathering learn from stories such as that one.
Miguel doesn’t believe a real Minority Report situation will ever come into play, but he believes “it would be useful to employ the assistance of information technology to track patterns of crime” so that the authorities could send help to the areas that may be targeted. He thinks Twitter could help with this, feeling the platform is better at predicting riots than the police are.
Fabio feels it’s best not to mix technology and crime prevention.
Kenneth notes that “based on the fact that we’ve seen predictive Artificial Intelligence systems perform activities that mimic human behavior, it’s living proof that there is no limit to what machines can do.” However, he points out that if this technology predicts based on past crime, there’s no way to gather data from areas with no record of crime. Still, he’d like to see it used to predict or prevent cyber crimes.
Simon thinks it could work well in tandem with real-life crime-solving. “Leaving it all in the hands of an AI sounds too risky, and I’d hate to see a situation where people simply point to the AI as the judge, jury, and executioner.” However, if the AI could act like a “crime bloodhound” to find danger before it occurs, he can see making a case for it. Yet he can also see that power being abused, with police taking more liberties than they should and blaming the AI. He can see it saving lives, but he can also see innocents getting hurt.
Like Simon, I see the possibilities for harm as well. I worry about it leading to something similar to racial profiling, or to what is happening in the U.S. now with regard to immigration: blocking people from entire countries just because some individuals there want to cause harm, hurting innocents in the process. I worry about predictive crime-fighting turning out the same way.
What do you think about “smart crime prevention?” Do you see it ending up like Minority Report? Would it be helpful or harmful? Do you think using tech to predict crime is a good idea? Let us know below in the comments section!