Should We Be Worried About Artificial Intelligence?

On October 27, 2014, Elon Musk made a statement that surprised some and confirmed what others had long suspected: artificial intelligence is the most serious threat to the survival of the human race. Musk is no sensationalist. He's an educated man with ambitious, largely successful business ventures, which is why many people take what he says seriously. As creative and intelligent as he is, however, he can still be wrong. It wouldn't be the first time a respected figure had a misguided hunch. Is AI really dangerous? Are we ultimately going to spell our own doom?


Regardless of whether Elon Musk is right about his predictions for AI, we have to recognize one thing: it is entirely possible to create a robotic entity that could cripple us, and that possibility grows more real as time passes. We're not here to discredit Musk's assertions or dismiss them as the ramblings of a man who is simply overreacting. No. His opinion is valid. We just want to find out whether it's a certainty.

We've seen many examples in war zones of what happens when technology falls into the hands of destructive individuals or groups. The Messerschmitt Me 262 was quite a force to be reckoned with in World War II. Wernher von Braun, had he stayed in Germany, probably would have continued developing weapons of massive destructive capacity for the Third Reich. Who's to say that our recent robot-making revolution won't lead to destructive results from some creators?


While Elon Musk startled technophiles with his statements at the MIT conference in October, he wasn't the first prominent figure to express deep concern about this. Stephen Hawking, the well-known astrophysicist who relies on a computer to communicate because of the paralysis caused by amyotrophic lateral sclerosis, said in June 2014 that "artificial intelligence could be a real danger in the not too distant future."

According to him, it doesn't matter what safeguards we build into computer-driven robots to control them. Eventually, they will gain the capacity to design their own improvements and "outsmart us all."

As far as the end of humanity is concerned, I have serious doubts that we'll end up in a SkyNet scenario. Yes, AI can be quite dangerous. But consider that computers aren't nearly as intelligent as they will be by 2020, and we're already experiencing problems. The simplest example is an elevator that gets stuck between floors. Right now, the most prominent danger isn't malice but an error with disastrous consequences: since machines are prone to errors, they will malfunction and eventually do things we don't want them to do.

Advanced robotics may lead us into a future where we have to be wary of AI, and we're heading in that direction at an alarming pace. Do you think AI may become dangerous? What do you see happening in the future? We want to hear your answers to these pressing questions, so leave a comment below with your thoughts!