AI Researcher Believes It Could Be Disaster for Humanity but Has a Solution


Much of the conversation around technology is about how it will affect the future: whether it will be good or bad, and how it will change things. That concern doesn't just extend to the people who use technology but to those who develop it as well.

A leading artificial intelligence researcher, Stuart Russell, is so involved in the technology that he co-wrote a textbook on it. Yet, at the same time, he’s also been warning the public that AI could be disastrous for humanity, but he does have a solution.

Human Compatible

In Russell’s newest book, “Human Compatible,” he explains that AI systems are evaluated by how well they achieve their ultimate objective, whether it’s to win a video game, write text, or solve puzzles. If a strategy fits the objective, they use it.

He believes this system leads to failure because the “objective” that is provided to the AI system is not the only thing that matters. An autonomous car has an “objective” to travel from one place to another, but the lives of passengers and pedestrians matter as well.

In other words, AI systems only care about what we program as the objective, while humans care about so many other things, and that’s what Russell sees as disastrous.
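The gap Russell describes can be shown in a few lines of code. This is an illustrative sketch, not anything from his book: the plans, costs, and risk numbers are made up. The point is that an optimizer scores plans only by the objective it was given, so anything left out of that objective, such as safety, has literally zero weight.

```python
# Hypothetical plans for "get to the airport," each with a travel time
# and a safety risk. These numbers are invented for illustration.
plans = {
    "drive at 200 km/h": {"travel_time": 10, "safety_risk": 0.9},
    "drive at 100 km/h": {"travel_time": 20, "safety_risk": 0.1},
}

def misspecified_choice(plans):
    # The objective as stated: minimize travel time.
    # Safety never enters the score, so it is ignored entirely.
    return min(plans, key=lambda p: plans[p]["travel_time"])

def intended_choice(plans):
    # What the human actually cares about: time AND risk
    # (the risk weight of 100 is an arbitrary illustration).
    return min(plans, key=lambda p: plans[p]["travel_time"]
               + 100 * plans[p]["safety_risk"])

print(misspecified_choice(plans))  # drive at 200 km/h
print(intended_choice(plans))      # drive at 100 km/h
```

Both functions optimize perfectly; they just optimize different things, which is exactly the "wrong objective" failure Russell warns about.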

The AI expert talked in a recent interview about these dangers that he describes in his new book.

He talks about the “King Midas” problem of getting everything exactly as you asked for it. “So if it’s a chess program, you give it the goal of beating your opponent or winning the game. If it’s a self-driving car, the passenger puts in the objective, [for instance,] I want to be at the airport,” Russell explained.

“The problem comes when systems become more intelligent. If you put in the wrong objective, then the system pursuing it may take actions that you are extremely unhappy about,” he noted.


When asked why we can't simply program in all the things we don't want, such as breaking laws or murdering anyone, Russell compares it to writing tax law: humans always come up with ways around the tax laws.

Russell says, “It doesn’t matter how hard you try to put fences and rules around the behavior of the system. If it’s more intelligent than you are, it finds a way to do what it wants.” And he believes this is going to make things much worse.

This is why he believes we need to abandon the way AI is configured. Instead of specifying a fixed objective, Russell believes the AI system should have "a constitutional requirement that it be of benefit to human beings."

“But it knows that it doesn’t know what that means,” he continues. “It doesn’t know our preferences. And it knows that it doesn’t know our preferences about how the future should unfold.

“So you get totally different behavior. Basically, the machines defer to humans. They ask before doing anything that messes with part of the world.”
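The deferring behavior Russell describes can be sketched in code. This is a toy illustration of the idea, not his actual formulation: the confidence threshold, action names, and `ask_human` callback are all hypothetical. The key difference from a fixed-objective agent is that when the system is unsure the human would approve, it asks instead of acting.

```python
def act(action, preference_confidence, ask_human, threshold=0.95):
    """Perform the action only if we are confident the human approves;
    otherwise defer and ask first. Threshold is an arbitrary choice."""
    if preference_confidence >= threshold:
        return f"doing: {action}"
    # Uncertain about human preferences -> defer to the human.
    if ask_human(action):
        return f"doing (approved): {action}"
    return f"skipped: {action}"

# A fixed-objective agent would just act. This one checks first,
# and the (simulated) human says no.
print(act("reroute through school zone", 0.4, ask_human=lambda a: False))
# skipped: reroute through school zone
```

The interesting property is that deference falls out of uncertainty: the lower the system's confidence that it knows our preferences, the more it hands control back to us.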

AI Misconception

Russell also believes "there's a general misconception about AI — which is promulgated by Hollywood for reasons of having interesting plots and by the media, because they seem to want to put pictures of Terminator robots on every article [guilty as charged for this article] — which is that the thing we need to be concerned about is consciousness, that somehow these machines will accidentally become conscious, and then they'll hate everybody and try to kill us."

Instead, what he feels we should be concerned about is “competent, effective behavior in the world. If machines out-decide us, out-think us in the real world, we have to figure out how do we make sure that they’re only ever acting on our behalf and not acting contrary to our interests.”

What do you think about the current way AI is used? Do you think it will be disastrous for our future? Do you think Russell’s solution will end that possibility? Tell us what you think in a comment below.

Laura Tucker

Laura has spent nearly 20 years writing news, reviews, and op-eds, with more than 10 of those years as an editor as well. She has exclusively used Apple products for the past three decades. In addition to writing and editing at MTE, she also runs the site's sponsored review program.


  1. “Russell believes the AI system should have ‘a constitutional requirement that it be of benefit to human beings.’”
    Sounds like Russell is talking about Asimov’s Three Laws of Robotics.

    I was under the impression that the aim of AI was to replicate, or at least approximate, the reasoning capabilities of a human. Hence Artificial INTELLIGENCE.

    “AI Misconception”
    I think the misconception everyone is making, including Russell, is that they are conflating AI with anthropomorphic robots. Neither robots nor AI have to have a human form. In fact, human form may impede the functionality. AI can and will exist/function without robots, and robots can and do work without AI. When they do come together, it will be in the shape of an android such as Mr. Data from Star Trek or Bishop from Alien.

    “somehow these machines will accidentally become conscious”
    Not accidentally, by design and by evolution. There is no intelligence without consciousness and self-awareness. To become true Artificial Intelligence, it has to be conscious.

    “then they’ll hate everybody and try to kill us”
    The same can be said of children. AI needs to undergo the same process of socialization as human beings do while growing up. Children that grow up in an improper environment can and do become killers. Even though AI is a complicated bunch of electronics, it is supposed to emulate human intelligence and as such, it has to go through the same learning process as humans do.

    “Do you think it will be disastrous for our future?”
    It MAY be but not for the reasons Russell mentions. If AI becomes too human-like, it may exhibit some (or many) of the less desirable human traits. Since AI will be much more durable than humans, more logical, able to think faster, and requiring no sustenance, it may decide that it is a superior species that can do pretty much as it pleases with any inferior species. (ubermenschen-untermenschen) Humans feel pretty much that way about animal species. Animals are here for our convenience, to serve us.

    1. A computer (as we know them now) can only do what it’s programmed for. We are making them faster and have more storage but they are still just as dumb as we tell them to be. Without humans it’s still just a piece of hardware like a toaster.

  2. My point is that the fear is unfounded. AI is no smarter than the signal light changer at an intersection. Give it a job to do and it will do it; it's not going to change its mind and do something else. The faster a computer gets and the more storage it has, the more jobs you can give it, and those jobs will be carried out in what seems to us like a seamless, falsely sentient way. In my opinion we are light years away from creating a machine that can actually think on its own. As for now we are still the brains behind the AI.

    1. You are confusing AI with programmable robots. As you have pointed out, computers and programmable robots/devices can only do what they are programmed to do. No matter how sophisticated they are, they are as intelligent as the lumps of metal they were made from. Even though they can spew out information like a fire hose, computers are as intelligent as an encyclopedia, meaning not at all. A newborn baby is more intelligent than the most complex and advanced computer.

      AI (Artificial Intelligence) is a self-aware, conscious “computer” that simulates biological intelligence. Just like a human, AI can make decisions based on incomplete data, it can learn from past experiences, it can adapt instantly when conditions change. IOW, it can think like a human.

      “I my opinion we are light years away from creating a machine that can actually think on its own.”
      Agreed, to an extent. Before we can build Mr. Data, someone will have to invent a positronic brain. However, we already have rudimentary machines that can think for themselves. (Deep Blue) If self-driving cars are to become a reality, a machine that can think for itself (AI) needs to be developed right quick because a robot cannot cope with all the variables involved in running a car in traffic. A robot does not have the capacity/capability to react to all the random stupid human tricks.

  3. I agree that robots and AI are separate ideas. But, do you think AI will spontaneously create itself or will it be a product of human creation? I just don’t see the leap from robot to AI. If it is a creation of man can it ever truly be sentient? We still don’t know how we work. I don’t think we are there yet or in the foreseeable future. For me it’s an interesting and even fascinating subject, but still science fiction.

    1. ” do you think AI will spontaneously create itself”
      Yes. One of these days, some scientist/tinkerer will build a proof-of-concept self-replicating device. Once (s)he does, the flood gates will open. It probably won’t happen this year or next but it WILL happen sooner than later. We already have self-diagnosing and self-fixing machines.

      ” in the foreseeable future”
      Please quantify ‘foreseeable’. A month? A year? A century? A millennium?
      For some projects/inventions/discoveries, it may be only months. For others (transporter beams, inter-galactic travel) it’s probably centuries.

      “For me it’s an interesting and even fascinating subject, but still science fiction.”
      You make it sound like science fiction is the same as fantasy. Ever read Jules Verne? With few exceptions (submarines) pretty much most of the technology he wrote about has become everyday. In 1945, Arthur C. Clarke postulated a system of satellites in geosynchronous orbit. Science fiction of yesterday is the science of today and commonplace of tomorrow. Maybe Barsoom is imaginary but human colonies on the Moon and Mars are only a matter of logistics.

Comments are closed.