AI Researcher Believes It Could Be a Disaster for Humanity but Has a Solution

Much of the conversation around technology is about how it will shape the future, whether for good or ill, and how it will change things. That conversation isn’t limited to the people who use technology; it includes the people who develop it as well.

A leading artificial intelligence researcher, Stuart Russell, is so steeped in the technology that he co-wrote the field’s standard textbook, “Artificial Intelligence: A Modern Approach.” Yet he has also been warning the public that AI could be disastrous for humanity, and he believes he has a solution.

Human Compatible

In Russell’s newest book, “Human Compatible,” he explains that AI systems are evaluated by how well they achieve their ultimate objective, whether it’s to win a video game, write text, or solve puzzles. If a strategy fits the objective, they use it.

He believes this approach is bound to fail, because the “objective” given to an AI system is never the only thing that matters. An autonomous car has an “objective” to travel from one place to another, but the lives of its passengers and of pedestrians matter as well.

In other words, AI systems care only about the objective we program into them, while humans care about a great many other things, and that mismatch is what Russell sees as potentially disastrous.
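
To make the idea concrete, here is a minimal Python sketch (not from Russell’s book) of an agent that ranks candidate plans by a single programmed objective; the plan names, scores, and side effects are invented purely for illustration.

```python
# A toy agent that ranks candidate plans purely by one programmed objective.
# The plans, scores, and side effects below are invented for illustration.

plans = [
    # (name, objective_score, unmodeled_side_effect)
    ("drive at the speed limit", 0.7, "none"),
    ("run red lights to save time", 0.9, "endangers pedestrians"),
    ("take the quiet scenic route", 0.5, "none"),
]

def objective(plan):
    """The only thing the agent is told to care about, e.g. minimizing travel time."""
    _, score, _ = plan
    return score

# The agent simply maximizes its objective. The side-effect column never
# enters the decision, which is the gap Russell is pointing at.
best = max(plans, key=objective)
print("Chosen plan:", best[0])          # -> run red lights to save time
print("Ignored side effect:", best[2])  # -> endangers pedestrians
```

The point of the toy is that nothing outside the objective function ever enters the decision, no matter how much it might matter to us.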

In a recent interview, the AI expert discussed the dangers he describes in the new book.

He calls it the “King Midas” problem: getting exactly what you asked for rather than what you actually wanted. “So if it’s a chess program, you give it the goal of beating your opponent or winning the game. If it’s a self-driving car, the passenger puts in the objective, [for instance,] I want to be at the airport,” Russell explained.

“The problem comes when systems become more intelligent. If you put in the wrong objective, then the system pursuing it may take actions that you are extremely unhappy about,” he noted.

When asked why we can’t simply program in all the things we don’t mean, such as not to break any laws or murder anyone, Russell compares it to writing tax law: people always find ways around tax laws.

Russell says, “It doesn’t matter how hard you try to put fences and rules around the behavior of the system. If it’s more intelligent than you are, it finds a way to do what it wants.” And he believes this problem will only get worse as systems grow more intelligent.

This is why he believes we need to abandon the way AI systems are currently designed. Instead of specifying a fixed objective, Russell argues the AI system should have “a constitutional requirement that it be of benefit to human beings.”

“But it knows that it doesn’t know what that means,” he continues. “It doesn’t know our preferences. And it knows that it doesn’t know our preferences about how the future should unfold.

“So you get totally different behavior. Basically, the machines defer to humans. They ask before doing anything that messes with part of the world.”
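
Roughly speaking, that behavior looks something like the toy Python sketch below, in which the machine acts only when it is confident the human would approve and asks first otherwise. The candidate actions, approval probabilities, and threshold are all invented for illustration; Russell’s actual proposal is framed around uncertainty about human preferences rather than a hard-coded confidence cutoff.

```python
# A toy "defer to the human when unsure" policy. The candidate actions,
# approval probabilities, and threshold are all invented for illustration;
# this is not Russell's actual formulation.

candidate_actions = {
    # action: the machine's estimated probability that the human would approve
    "recharge own battery": 0.99,
    "reorganize the user's files": 0.60,
    "geoengineer the atmosphere": 0.01,
}

CONFIDENCE = 0.95  # act only when this confident the human would approve

def decide(action, p_approve):
    if p_approve >= CONFIDENCE:
        return f"do: {action}"
    if p_approve <= 1 - CONFIDENCE:
        return f"refuse: {action}"
    return f"ask the human first: {action}"

for action, p in candidate_actions.items():
    print(decide(action, p))
```

The uncertainty is what changes the behavior: instead of barreling ahead toward a fixed goal, the machine defers to people whenever it isn’t sure what they actually want.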

AI Misconception

Russell also believes “there’s a general misconception about AI — which is promulgated by Hollywood for reasons of having interesting plots and by the media, because they seem to want to put pictures of Terminator robots on every article [guilty as charged for this article] — which is that the thing we need to be concerned about is consciousness, that somehow these machines will accidentally become conscious, and then they’ll hate everybody and try to kill us.”

Instead, what he feels we should be concerned about is “competent, effective behavior in the world. If machines out-decide us, out-think us in the real world, we have to figure out how do we make sure that they’re only ever acting on our behalf and not acting contrary to our interests.”

What do you think about the current way AI is used? Do you think it will be disastrous for our future? Do you think Russell’s solution will end that possibility? Tell us what you think in a comment below.
