
Machine Self-awareness

What happens when robots start calling the shots?

Artificial-intelligence (AI) researchers have no doubt that the development of highly intelligent computers and robots that can self-replicate, teach themselves and adapt to different conditions will change the world. Exactly when that will happen, how far it will go, and what we should do about it, however, remain matters of debate.

Today’s intelligent machines are for the most part designed to perform specific tasks under known conditions. Tomorrow’s machines, though, could have more autonomy. “The more complex the kinds of tasks that we want machines to perform, the more we need them to take care of themselves,” says Hod Lipson, a mechanical and computer engineer at Cornell University. The less we can foresee issues, Lipson points out, the more we will need machines to adapt and make decisions on their own. As machines get better at learning how to learn, he says, “I think that leads down the path to consciousness and self-awareness.”

Although neuroscientists debate the biological basis for consciousness, complexity seems to be a key part, suggesting that computers with sufficiently adaptable and advanced hardware and software might someday become self-aware. If films such as The Terminator are correct, one way we will know that machines have attained that cognitive level is that they will suddenly wage war on us. More likely, experts think, we will see it coming.


That conceit derives from observations of humans. We are unique for having a level of intelligence that enables us to repeatedly “bootstrap” ourselves up to reach ever greater heights, says Selmer Bringsjord, a logician and philosopher at Rensselaer Polytechnic Institute. Whereas animals seem to be locked into an “eternally fixed cognitive prison,” he says, people have the ability to free themselves from their cognitive limitations.

Once a machine can understand its own existence and construction, it can design an improvement for itself. “That’s going to be a really slippery slope,” says Will Wright, creator of the Sims games and co-founder of Berkeley, Calif.–based robotics workshop the Stupid Fun Club. When machine self-awareness first occurs, it will be followed by self-improvement, which is a “critical measurement of when things get interesting,” he adds. Further improvements would come in subsequent generations, which, for machines, can pass in only a few hours.

In other words, Wright notes, self-awareness leads to self-replication leads to better machines made without humans involved. “Personally, I’ve always been more scared of this scenario than a lot of others” in regard to the fate of humanity, he says. “This could happen in our lifetime. And once we’re sharing the planet with some form of superintelligence, all bets are off.”

Not everyone is so pessimistic. After all, machines follow the logic of their programming, and if this programming is done properly, Bringsjord says, “the machine isn’t going to get some supernatural power.” One area of concern, he notes, would be the introduction of enhanced machine intelligence to a weapon or fighting machine behind the scenes, where no one can keep tabs on it. Other than that, “I would say we could control the future” by responsible uses of AI, Bringsjord says.

This emergence of more intelligent AI won’t come on “like an alien invasion of machines to replace us,” agrees futurist and prominent author Ray Kurzweil. Machines, he says, will follow a path that mirrors the evolution of humans. Ultimately, however, self-aware, self-improving machines will evolve beyond humans’ ability to control or even understand them, he adds.

The legal implications of machines that operate outside of humanity’s control are unclear, so “it’s probably a good idea to think about these things,” Lipson says. Ethical rules such as the late Isaac Asimov’s “three laws of robotics”—which, essentially, hold that a robot may not injure a human or allow a human to be injured—become difficult to obey once robots begin programming one another, removing human input. Asimov’s laws “assume that you program the robot,” Lipson says.

Others, however, wonder if people should even govern this new breed of AI. “Who says that evolution isn’t supposed to go this way?” Wright asks. “Should the dinosaurs have legislated that the mammals not grow bigger and take over more of the planet?” If control turns out to be impossible, let’s hope we can peaceably share the planet with our silicon-based companions.

Larry Greenemeier is the associate editor of technology for Scientific American, covering a variety of tech-related topics, including biotech, computers, military tech, nanotech and robots.

This article was originally published with the title “Machine Self-awareness” in Scientific American Magazine Vol. 302 No. 6, p. 44
doi:10.1038/scientificamerican0610-44