Is Superhuman AI Really as Dangerous as Musk and Hawking Say?

Last updated September 23, 2021
Written by:
Sydney Butler
Privacy & Security Writer

Whenever a new technology is introduced or a discovery is made, a degree of caution is warranted. New solutions tend to create their own problems, so they are ultimately a trade-off. Nuclear energy is a good example. On paper, it's much more environmentally friendly than fossil fuel. However, when it goes wrong, the scale of the catastrophe is off the charts. So what about "Superhuman AI"?

That's the tech specter several prominent thinkers have come out of the woodwork to warn us all against. What is "superhuman" AI though? I think the common-sense interpretation of the term would be any artificial intelligence that is significantly superior to human intelligence.

That sort of AI is even beyond the type the literature refers to as "Strong" AI or "General" AI, that is, AI that's more or less on par with the general intelligence of human beings.

The artificial intelligence infused into smartphone cameras, drones and chess-playing computers is of an altogether different variety. That's known as "narrow" or "weak" AI.

The people issuing these warnings are very smart, so when they ring the alarm bells, we should take them seriously. But how real is this danger? If you're feeling freaked out by the doomsayers of AI, let's take some time to step back and evaluate these warnings calmly.

Who Are These People?!

Elon Musk (CC-BY The Royal Society) & Stephen Hawking (Public Domain Image)

Just so that we're all on the same page, let's first talk about who Elon Musk and Stephen Hawking are.

Elon Musk is the South African-born entrepreneur who brought us Tesla and SpaceX, and by all accounts the closest thing we have to a real Tony Stark. Musk can be a divisive figure, but his influence on modern technological development is undeniable. The man is hardly a Luddite, so his dire warnings of an AI apocalypse aren't born of ignorance. He's also on board with plenty of Transhumanist thought and has startups looking at things like direct neural interfaces.

Professor Stephen Hawking is practically a household name. The now sadly deceased academic was responsible for a number of major advances in theoretical physics. His towering intellectual achievements are even more impressive given that he was confined to a wheelchair for most of his life, eventually unable to move much more than an eyelid or twitch his lips. Despite this, he wrote brilliant popular science books such as A Brief History of Time, which should be read by literally everyone. While Hawking was not a technology expert, he was one of our greatest thinkers, so his opinions deserve at least some consideration.

What Elon Musk Has to Say

Musk has compared super-intelligent AI to nuclear weapons, saying that such AIs would be far more dangerous. One of the things that worries him is the idea that "AI experts" may dismiss the possibility of super-intelligent AI out of hubris rather than on the basis of any concrete evidence.

Musk contends that from where he's sitting, at the cutting edge of AI development, he can clearly see a dark future of rogue AI. According to him, AI is already capable of far more than most people realize and is improving at an exponential rate, which means the transition from relatively benign AI to the dangerous sort he's talking about may be very sudden.

What Stephen Hawking Had to Say

Hawking also spoke about super-intelligences coming from other worlds, saying that any contact between us and a culture that knows how to traverse the stars would be a disaster for us. In other words, Hawking was concerned about superhuman intelligence of all types, including super-smart AI.

Hawking was also worried about super-intelligent aliens and augmented superhumans for the same reason. It makes a certain amount of sense from a human perspective. We only have to look at how we have treated other beings that are almost as smart as us. Heck, look at how we've treated other humans! If you apply human morality, at least our practiced morality, to super-smart and powerful aliens or AI, then it's bye-bye humanity!

The Open Letter

Back in 2015, Hawking, Musk and many other prominent intellectuals signed an open letter. This pivotal document clearly outlined the specific worries the group had about the further development of AI.

Although people tend to focus on the fears raised by the letter, it actually has a lot to say about the potential good AI can do. The point of this open letter was mainly to raise awareness of the dangers they foresaw and to encourage those who are working on AI in good faith to make AI safety a core part of their thinking.

The Two Sides of AI Risk


In the letter, the writers split the domain of AI risk into two. There are short-term worries that are relevant today, and then there are issues that apply to some unknown point in the future.

The short-term concerns are generally valid and deal with immediate dangers that AI may pose. For example, there is talk of creating fully autonomous military robots that can decide on their own to attack targets. That's a problem on multiple levels, not just a technical one. War zones aren't the only places where AI could make life-or-death decisions, either. Self-driving cars are a more mundane example, but operating a vehicle is dangerous. No matter how much safer self-driving cars might prove to be, eventually someone will die. In fact, Elaine Herzberg is already etched into the history books as the first pedestrian to be killed by an autonomous vehicle.

These are problems we need to solve as a society in the short to medium term. The other side of AI risk is very long-term indeed. Or it may not be; we don't really know, because we have no idea how long it will take to happen. I'm referring, of course, to superintelligent AI. The basic idea is that our ongoing advancement of AI will eventually lead to an intelligence explosion. Once machines start to improve themselves, they'll gain superintelligence almost instantly. That intelligence is then likely to either destroy us as a potential threat or destroy us as an afterthought. Either way, humans are gone.

It sounds very scary, and since very smart people are saying it, the temptation is to take it as gospel. However, how likely is this second scenario, really? While one should never say never, it's worth trying to think about this particular scenario's risk in a lucid way. Let's look at some of the pertinent factors.

Strong AI and Superhuman AI Are a Fantasy (For Now)


The biggest glass of cold water to pour over these warnings is that super-intelligent artificial intelligence is entirely hypothetical. We don't know if or when we'll see it happen. Even strong AI, which is "merely" as smart as humans, doesn't yet exist. It's not just a matter of time or advancement either. The various narrow AIs we have right now are powerful, but they still don't compare to even the simplest animals and their evolved brains. We don't yet fully understand how brains far less complex than ours work, much less the human brain itself. Consciousness is a complete mystery. We don't know how it works and have no idea how to make it happen.

So any serious worries about it are more than a little premature. Of course, in theory, strong AI is possible. After all, we humans exist as matter, and our brains are physical organs, so in principle it should be possible to artificially replicate human intelligence. The thing is, just because we know something is possible doesn't mean we'll ever figure it out. The tendency is to treat strong and super-intelligent AI as an inevitability, but that's pure speculation at this point.

We'll Have to Live With AI No Matter What

The simple fact is that AI applications, that is, narrow AI, are proving so useful that any effort to ban them or strictly regulate them is doomed, especially when you consider that the nations of the world aren't all on the same page. With AI promising incredible economic benefits through automation and resource management, no country can afford not to invest in it. Unless every nation stays away from developing AI, the countries that press ahead will end up dominating everyone else.

While people like Elon Musk are only asking for strong regulation rather than an outright ban, it's a fine line to walk. It's going to be even trickier because no one actually knows where the lines between weak AI, strong AI and super-intelligent AI lie. It's hard to set up protective regulations when you have no idea what the parameters of your danger zone are.

That basically leaves a scenario where we learn to live with AI and play the game of keeping it safely under control at least partly by ear.

The Past Doesn't Predict the Future

One great fallacy is the idea that the way things have gone in the past tells us how things will go in the future. People who are certain that current AI technology will lead to strong or super-intelligent AI have no real basis for this. We don't know, either qualitatively or quantitatively, what such AI would look like. There's no evidence that advancing current narrow AI will eventually birth strong AI or anything beyond it.

In fact, there is a lot we don't know about biological intelligence. We don't know how smart natural beings can become. Perhaps there are alien species out in the universe that are many times smarter than we are. Alternatively, for all we know, humans are as smart as it's possible to get under this universe's laws of physics. Since we have no direct observations of beings smarter than us, we can only guess.

AI Might Be the Only Thing That Saves Us

While there are many ways in which these concerns about powerful, hypothetical AI technology are legitimate, there's a flip side as well. Humans face a great number of challenges that could simply be beyond us. Climate change and energy management are good examples. Proper oversight of economic systems, diseases we just can't seem to cure, and any number of tough scientific and engineering problems might be solved with the help of AI.

So while Professor Hawking might not be wrong about the potential of superhuman AI being an existential threat to us, not developing our AI tools might lead to the same result!

There's a proposed solution to the Fermi Paradox that could be relevant here. That paradox, in case you don't know, is the fact that in this vast universe we're having a hard time spotting any other intelligent life.

The proposed solution here is that perhaps most intelligent species tend to destroy themselves before they become advanced enough to solve their biggest existential threats. The way things are going now, we're heading for a knife-edge. If our technology improves quickly enough, we may have the tools we need to preserve our species. If not, we'll just join the multitude of intelligent species who took a swing but missed the ball.

No One is Going to See Superhuman AI Coming

For all the warnings about the perils of super AIs, there's no indication that we'd even see one coming. We're working blind insofar as consciousness and intelligence are concerned. Consciousness and intelligence could very well be emergent properties of complexity and architecture; in other words, properties of a system as a whole. Change one little bit of that system and you get nothing; put things together just right and the result is apotheosis.

Maybe when we understand how that spark of consciousness is ignited in ourselves, it will give us the knowledge to prevent it from happening in our AI as well. More likely, if we ever do understand the physical underpinnings of consciousness, the ink won't have time to dry before someone tries to replicate it artificially.

What Are You Supposed to Do About Superhuman AI?

In all this talk about the dangers of future AI, the sort that could end the world, what are you meant to do with this information? Well, the first thing you should not do is lie awake at night worrying about it. We have much bigger short-term problems that can kill us just as readily. So if you're going to expend mental energy being concerned about something, spend it on the environment or peace in the Middle East.

Right, now that we've taken a step back from sheer panic, here's what you can do about AI. The only weapon we have as regular citizens is democracy. In other words, you get to vote for the people in power based on their policies. If the abuse of AI technology is something that bothers you, tell your representative. If you don't agree with your current government's plans for AI technology, let your voice be heard. While it's probably self-defeating to target all AI development, it may be possible to steer things in a more benign or even benevolent direction.

Do you think superhuman AI will kill us all? Let us know down below in the comments. Lastly, we’d like to ask you to share this article online. And don’t forget that you can follow TechNadu on Facebook and Twitter. Thanks!


