Should We Be Worried About Artificial Intelligence?

On October 27, 2014, Elon Musk made a statement that surprised some and confirmed what others had suspected: artificial intelligence is the most serious threat to the survival of the human race. Musk is not a sensationalist; he’s an educated man whose ambitious business ventures have been largely successful, which is why many people take what he says very seriously. As creative and intelligent as he is, however, he can still be wrong. It wouldn’t be the first time a well-respected figure had a misguided hunch. Is AI really dangerous? Are we ultimately going to spell our own doom?

Regardless of whether Elon Musk is right or wrong in his predictions about AI, we have to recognize one thing: it is definitely possible to create a robotic entity that can cripple us, and that possibility becomes more real as time passes. We’re not here to discredit Musk’s assertion and pass it off as the ramblings of a man who is simply overreacting. No. His opinion is valid. We just want to find out whether it’s a certainty.

We’ve seen many examples in war zones of what happens when technology gets into the hands of destructive individuals or groups. The Messerschmitt Me 262 was quite a force to be reckoned with in World War II. Wernher von Braun, had he stayed in Germany, could probably have given the Third Reich even greater destructive capacity. Who’s to say that our recent robot-making revolution won’t lead to destructive results from some of its creators?

While Elon Musk startled technophiles with his statements at the MIT conference in October, he wasn’t the first prominent figure to express deep concerns about AI. Stephen Hawking, the well-known astrophysicist who relies on a computer to communicate because of the paralysis caused by amyotrophic lateral sclerosis, said in June 2014 that “artificial intelligence could be a real danger in the not too distant future.”

According to Hawking, it’s irrelevant what controls we place on computer-driven robots. Eventually, they will gain the capacity to design their own improvements and “outsmart us all”.

As far as being the end of humanity is concerned, I have serious doubts that we’ll end up with a SkyNet scenario. Yes, AI can be quite dangerous, but consider that computers are nowhere near as intelligent as they will be by 2020, and we’re already experiencing problems. The simplest example is an elevator that gets stuck between floors. At the moment, the most likely danger is an error that ends up having disastrous consequences. Since machines are prone to such errors, they will malfunction and eventually do things we don’t want them to do.

Advanced robotics may lead us into a future where we will have to be wary of AI, and we’re heading in that direction at an alarming pace. Do you think AI may become dangerous? What do you see happening in the future? We want to hear your answers to these pressing questions, so leave a comment below with your thoughts!

9 comments

  1. The only (extremely) dangerous artificial intelligence that gives serious pause is that in Washington D.C. There’s a plethora of talk about intelligence there, but damned little evidence of its existence. Res ipsa loquitur, in spades. Anyone’s neighbor could run the government better.
    PS. You can write in anyone, including yourself, for any elected position in any election; you do not have to choose between truncated mice and jackasses.

  2. No. I’m not worried about artificial intelligence. Computers aren’t smart; they just respond to the data that has been entered. All they are is an oversized, faster calculator. Data in, data out. A computer does not reason, think, doubt, or question the world around it. Put bad data in, you get bad data out. True thinking persons/life forms have reason and are able to make judgment calls (using past experience, guesswork, and basic reasoning). A computer follows the data that has been entered; there isn’t any true thinking involved. People can naturally guess at an answer, naturally lie, cheat, or steal. A computer is just a very fast calculator with pictures and a large database, nothing more, nothing less. It does not truly want. Garbage in, garbage out.

  3. In my opinion, chessspartan frames the issue incorrectly. I am reminded of the initial attempts at human flight: there were numerous attempts to fashion birdlike wings under the misguided notion that in order to fly, man had to imitate flying creatures. That notion proved woefully incorrect. Once we got past it, the Wright brothers provided a whole new way of thinking about the subject. A century later, we think nothing of placing hundreds of people in airliners thousands of times per day, flying much farther and faster than any bird ever could.

    Similarly with “thinking”: we tend to regard thought as “human-like,” hence the many failures in the history of AI, particularly in the first few generations of the discipline. We are finally getting beyond that notion. For example, there’s an AI poker player out of a Canadian university whose algorithms are geared toward working with limited information; its creators claim that it can beat any human poker player in a match (not a single game, of course, but a match consisting of, say, 20 games).

    Similarly, Asimov’s “Three Laws of Robotics” sidesteps the most important question: given a situation in which two people or armies etc. are at war, how does the AI choose which side to assist and defend?

    • “Asimov’s “Three Laws of Robotics” sidesteps the most important question: given a situation in which two people or armies etc. are at war, how does the AI choose which side to assist and defend?”
      The assumption is that if humanity is technologically advanced enough to create a positronic brain, it is emotionally advanced enough to forego the pleasure of war. At the present time, that seems like wishful thinking on Asimov’s part. But then, isn’t any kind of forecasting or extrapolating into the future wishful thinking?

    • Actually, you’re right, but you forgot a few on the computer side: IBM’s Deep Blue beat Garry Kasparov at chess back in ’97, and Watson won Jeopardy! a bit more recently.

  4. “Should We Be Worried About Artificial Intelligence?
    Only if it becomes smarter than us and realizes that we humans are too error-prone, and therefore too dangerous to the AI, to be allowed to exist. In the AI’s opinion, we may be superfluous.

    “We just want to find out whether it’s a certainty.”
    Unfortunately, unless we invent time travel and go into the future, we will never know with 100% certainty the results our current actions cause. We can only guess and assign probabilities.

    “Eventually, they will gain the capacity to design their own improvements and “outsmart us all”.”
    Apparently Prof. Hawking is discounting the eventual possibility of Asimov’s Three Laws of Robotics.

    “As far as being the end of humanity is concerned, I have serious doubts that we’ll end up with a SkyNet scenario.”
    We could end up with Fred Saberhagen’s Berserker scenario. It’s a toss-up which scenario is better or worse, depending on your point of view.

  5. I don’t think AI can pose any threat at all. What you input is what it gives you…
    @chessspartan: you have said it all…

  6. A true AI will need to be treated as the sentient being it is, subject to the same rights and limitations as the rest of us. Granted, that can sometimes lead to wrong decisions (such as HAL’s pulling the plug on the crew of the Discovery because he got conflicting information and felt threatened). Dave Bowman did the right thing by removing HAL’s higher functions, essentially imprisoning HAL. So, if you don’t want those decisions to be made by an AI, then don’t put the AI in charge. Limit it to advisory status only and let a human make the decision. Any person or institution or government that makes a decision to put a life into the hands of an AI is ultimately liable for the decisions made by that AI.

  7. As an “AI developer”: YES, AI can be “dangerous,” depending on the intent or stupidity of the developer. Anyone able to develop AI systems should be able to calculate the possibilities, along with the limits. The wider the scope an AI system encompasses, the more important it is for the developer to calculate the range of possible actions; even when the system has great freedom of action within a narrow scope of activity, it is still the developer’s responsibility to work out ALL the bad scenarios and do something about them. An AI system is only as good or bad as its developer. AI systems are normally based on statistical calculations; “exceptions” need to be programmed in. The “simple” AI systems are not really AI, just automation systems based on rules. As for the SkyNet system in Terminator, the part about NOT destroying humanity was obviously NOT included in the program, which was the fault of the developer.
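
To make the last commenter’s distinction concrete, here is a minimal sketch of a rule-based “automation” system next to a statistical one whose “exception” has to be programmed in by hand. It is purely illustrative: every function name, threshold, and domain in it is hypothetical, not taken from the article or the comments.

    # Rule-based "automation": fixed thresholds, no learning, fully predictable.
    def rule_based_thermostat(temp_c: float) -> str:
        if temp_c < 18.0:
            return "heat"
        if temp_c > 24.0:
            return "cool"
        return "idle"

    # "Statistical calculation": a weighted score over word counts, with
    # weights presumed to have been learned from past data.
    def spam_score(word_counts: dict, weights: dict) -> float:
        return sum(weights.get(word, 0.0) * count
                   for word, count in word_counts.items())

    def classify_email(word_counts: dict, weights: dict, sender: str) -> str:
        # The "exception" must be coded explicitly; the statistics alone
        # will never encode it. The domain below is a hypothetical allow-list.
        if sender.endswith("@payroll.example.com"):
            return "ham"
        return "spam" if spam_score(word_counts, weights) > 1.0 else "ham"

Whether either sketch deserves the label “AI” is exactly the commenter’s point: the first is pure rules, and the second is only as good as its learned weights and its hand-coded exceptions.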

Comments are closed.
